Quant tools gain acceptance, but key uses are ignored
Bloomberg wrote about active managers starting to employ quantitative strategies, describing the new trend of incorporating data and quantitative tools in the asset selection & valuation process. This is a logical trend since, as I discussed previously, there are many new data sources and quantitative tools being developed that can augment fundamental analysis. However, the real issue with active managers may not be their inability to find alpha, but rather their inability to capture the alpha they predict. Many active managers have shortcomings in aspects of the investment process other than alpha prediction, and these can be fixed by incorporating quantitative tools. I described many such flaws in “The seven deadly sins of active managers”, which concludes that most managers need to change their process. Put simply, almost all active managers should employ quantitative tools to integrate the portfolio construction and trading process with their research and asset valuation function. Unless a manager runs extremely concentrated portfolios, allowing portfolio managers to operate separately from both the risk management and trading functions will lead to underperformance. This is due to the inability of human managers to pick optimal position sizes in the context of trading costs and market conditions, as well as the excess transaction costs incurred when adjusting portfolios ex ante for risk. Thus, if you are an active manager or are evaluating active managers, I strongly encourage you to focus on the investment process when considering quantitative techniques. (If you want help with this, feel free to reach out to me at viablemkts.com.)
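To make the position-sizing point concrete, here is a minimal sketch of why sizing cannot be separated from trading costs. It uses a stylized square-root market-impact model and entirely hypothetical parameters (the function names, coefficients, and numbers are my own illustration, not any manager’s actual process):

```python
import math

def expected_net_alpha(shares, alpha_per_share, adv,
                       impact_coef=0.1, spread_cost=0.01):
    """Expected edge minus stylized trading costs for a given trade size.

    alpha_per_share : predicted edge, in dollars per share
    adv             : average daily volume of the stock
    impact_coef     : coefficient of the square-root impact term (hypothetical)
    spread_cost     : half-spread paid per share (hypothetical)
    """
    # Per-share impact grows with the square root of participation,
    # a common stylized cost model.
    impact = impact_coef * math.sqrt(shares / adv)
    return shares * (alpha_per_share - spread_cost - impact)

def best_size(alpha_per_share, adv, max_shares, step=1000, **kw):
    """Grid-search the trade size that maximizes expected net alpha."""
    return max(range(step, max_shares + 1, step),
               key=lambda q: expected_net_alpha(q, alpha_per_share, adv, **kw))

# A 5-cent predicted edge on a stock trading 1M shares/day: the optimal
# size is far below the point where impact consumes the entire alpha.
q = best_size(alpha_per_share=0.05, adv=1_000_000, max_shares=500_000)
```

The point of the sketch is that the alpha forecast alone does not determine the trade: a size that looks attractive gross of costs can have negative expected value once impact is included, which is exactly the calculation a portfolio manager working separately from the trading desk never makes.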
More Fear Mongering about HFT
Another misleading article about HFT, this time from MarketWatch, was published in the TabbForum. This opinion piece invokes fear of flash crashes, presumably to create headlines or to blame technology. What is interesting is that there are many elements of truth in its story. The modern market is more susceptible to flash moves, particularly in markets other than equities, which implemented the “Limit Up / Limit Down” regime to protect against extreme gyrations. The reality, as I pointed out in a research paper in 2013, is that a “feature” of the modern market is liquidity concentrated near the NBBO. This means that an order that is too large for the available size in the market, and too aggressive in the price it is willing to trade at, can cause the type of flash crash depicted in the article, since the available liquidity drops precipitously as the price moves farther from the NBBO.
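The mechanics of such a sweep can be shown with a toy order book (all prices and sizes are hypothetical, chosen only to mirror the “concentrated near the NBBO, thin below” shape described above):

```python
def sweep(bids, qty):
    """Walk a list of (price, size) bid levels, best bid first, with a
    market sell of `qty` shares. Returns (avg_fill_price, last_price)."""
    filled, cost, last = 0, 0.0, None
    for price, size in bids:
        take = min(size, qty - filled)
        filled += take
        cost += take * price
        last = price
        if filled == qty:
            break
    return cost / filled, last

# Most displayed size sits within a few cents of the NBBO; the tail is thin.
book = [(10.00, 5000), (9.99, 3000), (9.98, 1000), (9.50, 500), (9.00, 500)]

# A sell sized to the near-touch liquidity trades at the touch; a sell
# twice that size gashes through the thin tail and prints 10% below.
avg_small, last_small = sweep(book, 5000)    # fills entirely at 10.00
avg_big, last_big = sweep(book, 10000)       # last fill down at 9.00
```

Nothing in this sketch requires malice or speed: the same book that absorbs a right-sized order cleanly produces a flash-crash-style print the moment an order exceeds the liquidity concentrated near the touch.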
Unfortunately, the article fails to note that the most likely originator of such an order is a human being entering an order that is too large, either directly or via a poorly designed agency algorithm that fails to properly size its order routing. (This is precisely the conclusion of the “flash crash report” written jointly by the SEC and CFTC.) Moreover, the article misses one of the primary causes of this pathology: regulators, in their zeal to prevent “spoofing and layering,” have actively discouraged firms from placing orders above and below the market. While the regulators clearly did not intend this outcome, the cause of this market vulnerability is the notion that high “order to execution” (or “cancel to execution”) ratios are indicative of layering or spoofing. Trading strategies that place orders far from the NBBO are rarely filled, since those orders only execute when overly large, aggressive orders arrive. As a result, firms employing such a strategy would have very large cancel-to-execution ratios. Since that outcome is actively discouraged by regulators and by firms’ own compliance departments, firms have tended to abandon such strategies. The pathology observed by the article is therefore completely predictable. It is due not to HFT, or to the panic of market makers, but rather to the active discouragement of the very market making strategies that would buffer against “fat finger” orders.
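A quick simulation shows why the ratio is mechanically enormous for such a strategy. The fill probability here is an arbitrary assumption (one oversized sweep per thousand quoting intervals) purely for illustration:

```python
import random

# Hedged toy: a liquidity provider quotes far below the NBBO and cancels
# or reprices whenever the quote goes unfilled. Fills occur only on the
# rare oversized sweep, so cancels vastly outnumber executions.
random.seed(7)

fills = cancels = 0
for _ in range(100_000):              # one resting quote per interval
    swept = random.random() < 0.001   # an oversized aggressive order arrives
    if swept:
        fills += 1
    else:
        cancels += 1                  # quote cancelled, never filled

ratio = cancels / max(fills, 1)       # cancel-to-execution ratio
```

Under these assumptions the ratio comes out in the high hundreds, not because the firm is spoofing, but because a backstop quote is, by design, almost never hit. Treating the ratio alone as a red flag therefore penalizes exactly the strategy that would cushion a fat-finger sweep.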
Having said this, there is another important truth in the article: the relevance of behavioral patterns to more enduring events. It cites Charles Mackay’s seminal work “Memoirs of Extraordinary Popular Delusions and the Madness of Crowds,” which explained the tulip bubble and was broadly applicable to the internet bubble at the end of the last century. Such trends are very real and can develop into powerful forces. It is even possible that too many computer algorithms operating with similar investment theses could become self-reinforcing on much faster timescales, leading to significant asset price moves. However, the notion that the madness of crowds has anything to do with flash events, which typically reverse as quickly as they occur, is tenuous at best. Articles such as this one seem to be written with a conclusion in mind (in this case, fear of high-speed trading and automation) and are willing to twist and stretch their logic in many directions to “prove” their point.
Are Index Investments Truly “Passive”?
The Wall Street Journal reports on the decision by S&P to exclude stocks without voting rights for shareholders. They explain how S&P and FTSE Russell have essentially rebuked Silicon Valley’s penchant for capital structures that concentrate voting power in the hands of founders. This story points out something else, however, which commentators, including Matt Levine, have noted as well. While investing in index funds is more “passive” than active funds, the indexes themselves embed active decisions about which stocks to include. If it turns out that voting rights are predictive of excess returns, then these indexes will outperform the market; if the opposite is true, they will underperform. This, of course, is part of a more nuanced conversation about the pros and cons of different modes of index construction, but I will save that for another day.