# 1015 – Uncertain Market Making

TL;DR: paper and slides.

Market makers are critical in financial markets: they provide liquidity to facilitate resource reallocation (timely and cheaply), and they price securities to reflect information. In reality, market making is subject to many frictions. Among many other issues, what happens when some or all market makers are absent? One step back: what if some market makers know that their competitors, who help reallocate resources and contribute to price discovery, might not be there?

To make the case, consider an example from O'Hara (2015), who in turn cites Berman (2014). There are 14 exchange-traded products linked to gold in the U.S. These 14 securities imply 91 ($$=14\times(14-1)/2$$) distinct pairs of arbitrage relationships that must be monitored in continuous time. This is a nightmare for a market maker covering gold-related products, who needs to monitor all these securities, in continuous time, for all market events: quote submissions, revisions, trade executions, and so on.
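The pair count is just the number of ways to choose 2 securities out of 14. A quick sanity check in Python (the tickers are made up for illustration):

```python
# Count the distinct arbitrage pairs among n cross-listed gold products.
# For n = 14, n * (n - 1) / 2 = 91 pairs to monitor continuously.
from itertools import combinations

products = [f"GOLD_ETP_{i}" for i in range(1, 15)]  # 14 hypothetical tickers
pairs = list(combinations(products, 2))
print(len(pairs))  # → 91
```

Each additional cross-listed product adds a new pair against every existing one, so the monitoring burden grows quadratically with the number of related securities.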

So the obvious implication seems to be that market makers cannot actively monitor the market at all times. Then what?

In a recent research project, I explore the implications of such uncertain market making. The starting point is a standard theoretical model of trading, building on the seminal work of Kyle (1985). Relaxing the assumption of perfectly competitive market making, I instead assume strategic market makers whose presence is not deterministic. Interestingly, with this small change in assumption, the resulting rational expectations equilibrium involves a symmetric mixed strategy: all market makers choose to price the incoming order flow randomly. The following figure illustrates the main theoretical finding.

Order flow is priced randomly in equilibrium.
(Figure 2(d) in the paper)

The figure shows how the expected price paid per unit of order flow (vertical axis) varies with the information-to-noise ratio (horizontal axis). The dashed line in the middle, labeled “$$\sigma_V/(2\sigma_U)$$”, is the baseline Kyle solution: as the order flow becomes more informed (higher information-to-noise ratio), the per-unit cost of trading increases. Under market making uncertainty, however, this competitive solution is no longer achieved. The realized pricing of the order flow fluctuates randomly within the shaded (pink) area. The solid blue line, labeled “$$(1+\zeta)\lambda$$”, is the expectation, and because of uncertain market making this expected cost is clearly higher than the baseline. (The parameter $$\zeta$$ reflects the expected markup that market makers strategically charge because of their competitors' uncertain presence.) Worse, because of the increased trading cost, investors scale back their trading aggressiveness. Less information is therefore impounded into the price, and the efficient component of the price is lower than in the baseline (indicated by the dot-dashed line labeled “$$\lambda$$”).
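As a rough numerical illustration of the pattern in the figure, the sketch below simulates realized per-unit pricing that fluctuates above the Kyle benchmark $$\lambda=\sigma_V/(2\sigma_U)$$ with mean $$(1+\zeta)\lambda$$. The mixing distribution (a shifted exponential) and all parameter values are my own stand-ins, not the paper's equilibrium strategy:

```python
# Sketch: realized order-flow pricing under uncertain market making.
# The exponential mixing distribution and all parameters are stand-ins,
# chosen only so that the mean equals (1 + zeta) * lambda.
import numpy as np

rng = np.random.default_rng(0)

sigma_V, sigma_U = 1.0, 1.0      # stand-in information / noise scales
lam = sigma_V / (2 * sigma_U)    # competitive Kyle price impact (0.5 here)
zeta = 0.5                       # stand-in expected markup parameter

# Realized per-unit pricing: random, supported above lam, mean (1 + zeta) * lam.
realized = lam * (1 + zeta * rng.exponential(1.0, size=100_000))

print(realized.min() >= lam)                            # True: never below benchmark
print(abs(realized.mean() - (1 + zeta) * lam) < 0.01)   # True: mean near 0.75
```

The two printed checks capture the figure's message: realized pricing never undercuts the competitive benchmark, and its average sits strictly above it by the markup factor $$(1+\zeta)$$.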

I then take the theory to the data and ask how to empirically measure the magnitude of this random pricing of order flows. Building on the rich literature of empirical market microstructure, I design a state space model that can be readily estimated via a novel generalized method of moments. The estimation results show that in 2014, the average U.S. stock faced a random markup around 17 times larger than the short-run price impact. To put this number into perspective, consider a buy of \$10,000 notional. On average in 2014, it would move the stock price up by 1.06 basis points within one second, which is the short-run price impact. However, this price impact has a dispersion of around 17 times: the same trade could plausibly generate a price impact of 17 basis points. This dispersion is smaller for large stocks (12 times) than for small stocks (22 times). Worryingly, back in the early 2000s the dispersion was only about 2 times, as the following graph shows.
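To see mechanically what a "17 times" dispersion means, the toy below simulates per-trade price impacts whose standard deviation is 17 times their mean and then recovers that ratio from the sample. The normal distribution and the simulation itself are purely illustrative stand-ins; the paper estimates the dispersion from a state space model via GMM, not from simulated data:

```python
# Toy illustration of the dispersion ratio (std / mean) of price impacts.
# The normal distribution is a stand-in, not the paper's state space model.
import numpy as np

rng = np.random.default_rng(1)

mean_bp = 1.06   # short-run impact of a $10,000 buy, in basis points (2014 average)
ratio = 17.0     # dispersion of the random markup relative to the mean impact

impacts = rng.normal(loc=mean_bp, scale=ratio * mean_bp, size=2_000_000)
estimated = impacts.std() / impacts.mean()
print(abs(estimated - ratio) < 1.0)   # True: the dispersion ratio is recovered
```

With such a wide distribution around a 1.06 basis point mean, an impact on the order of 17 basis points for a single trade is well within one standard deviation, which is what makes the estimated dispersion economically alarming.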

Dispersion of order flow pricing: 2000-2014.
(Figure 8(a) in the paper)

The empirical analysis is descriptive in nature and hence does not offer a causal story. Nevertheless, should the theory of uncertain market making stand, I suspect that the increase in the dispersion of order flow pricing is largely due to the recent "rise of the machines". On the one hand, technological improvements offer faster access to the market, so traders can process more information in any given time interval. However, speed is a double-edged sword: the fact that everyone can now process information faster means that everyone can also react to information faster. In any given short interval, all market participants generate more information, and the amount of such "new" information grows quickly with the number of participants gaining speed. Overall, market monitoring efficiency depends on the net effect of the two edges. The above figure, in light of the theory developed in the paper, suggests that in the short run (e.g., within a second) the net effect has been negative: pricing efficiency for order flows deteriorated over the decade, especially after 2007.

##### References
• Berman, Gregg E. 2014. “What Drives the Complexity and Speed of our Markets?” Speech.

• Kyle, Albert S. 1985. “Continuous Auctions and Insider Trading.” Econometrica 53 (6):1315–1336.

• O’Hara, Maureen. 2015. “High Frequency Market Microstructure.” Journal of Financial Economics 116 (2):257–270.

17:38:52, Nov 18, 2015
bzy@Fontainebleau