Nobelist William F. Sharpe, speaking at a CFA Institute Annual Conference last year, said, “When I hear smart beta, it makes me sick.” And yet its popularity has swept not only the ETF universe but academia too. According to a paper by Duke University professor Campbell Harvey, hundreds of academic papers have been published about the “factors” that underlie smart beta strategies. Wharton Research Data Services Research Director Denys Glushkov identified 164 U.S. domestic equity smart beta ETFs during the 2013-2014 period.
What does smart beta mean? Does it deserve the attention it is getting from the market and academia?
The origin
In 1992 Eugene Fama and Kenneth French published an article in The Journal of Finance titled “The Cross-Section of Expected Stock Returns.” In the article, Fama and French ran regressions of stock portfolios against three variables: the return on the market as a whole, the returns on small stocks and the returns on value stocks. In these regressions, they found that portfolio returns depended substantially on two of the factors – small stock returns and value stock returns – more than they did on the market as a whole.
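For readers who want to see the mechanics, here is a minimal sketch in Python of this kind of three-factor regression. The return series, the factor loadings and the use of the statsmodels library are my own illustrative assumptions; they are not Fama and French’s data or code.

```python
import numpy as np
import statsmodels.api as sm

# Simulated monthly data, for illustration only.
rng = np.random.default_rng(0)
n_months = 360
market_excess = rng.normal(0.006, 0.045, n_months)   # market return minus the risk-free rate
smb = rng.normal(0.002, 0.030, n_months)              # "small minus big": the small stock factor
hml = rng.normal(0.003, 0.030, n_months)              # "high minus low": the value factor
portfolio_excess = (0.3 * market_excess + 0.5 * smb + 0.4 * hml
                    + rng.normal(0.0, 0.02, n_months))  # a portfolio tilted toward small and value

# Regress the portfolio's excess returns on the three factors.
X = sm.add_constant(np.column_stack([market_excess, smb, hml]))
result = sm.OLS(portfolio_excess, X).fit()
print(result.params)  # the intercept ("alpha") and the three factor loadings ("betas")
```

The regression coefficients on the small stock and value series are the “dependencies” referred to above.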
After the article came out, the late, revered Fischer Black, who would have won a Nobel Prize for the Black-Scholes formula had he not died in 1995, attacked it viciously, calling it the product of “data mining” – a pejorative term that can be applied to most of the evidence advanced for the superiority of investment strategies whose outperformance has been shown only in historical data.
It should be noted that running a multiple regression is an exceedingly easy thing to do. It can take a researcher literally less than ten seconds to run a multiple regression on numerous variables in an Excel spreadsheet, provided the values of those variables have already been entered in the columns. This ease invites overuse of the procedure; indeed, it is massively overused.
By running regressions or backtests multiple times on the enormous amount of historical stock market data that exists, a researcher cannot fail to find some investment strategy or statistical relationship that appears valid – one that can then be used to sell an investment product or to write a paper. But the apparent validity of the strategy or relationship is often spurious, an accidental concurrence of random data discovered through intensive data mining. The phrase often used to describe this phenomenon is, “If you torture the data hard enough, it will confess to anything.”
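The phenomenon is easy to reproduce. Here is a minimal sketch in Python, using made-up data: a purely random return series is regressed against 200 purely random candidate “factors,” and by chance alone roughly 5% of them clear the conventional significance threshold.

```python
import numpy as np
import statsmodels.api as sm

# Purely random "returns" and purely random candidate "factors": there is nothing real to find.
rng = np.random.default_rng(1)
n_months, n_candidates = 360, 200
returns = rng.normal(0.005, 0.04, n_months)

significant = 0
for _ in range(n_candidates):
    fake_factor = rng.normal(0.0, 0.03, n_months)
    fit = sm.OLS(returns, sm.add_constant(fake_factor)).fit()
    if abs(fit.tvalues[1]) > 1.96:   # the usual 5% "significance" cutoff
        significant += 1

# Expect roughly 10 of the 200 meaningless factors to look "significant."
print(f"{significant} of {n_candidates} random factors appear statistically significant")
```

Run enough of these searches, keep only the winners, and you have a marketable “factor” – until it meets data it was not mined from.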
One remedy that has been proposed for promiscuous data mining is to have a reasonable theory firmly in mind before turning to the data to seek statistical evidence to support it. “Lack of theory,” said Black in his critique of the Fama-French paper, “is a tipoff: watch out for data mining!” Black went on to say of their paper, “I especially attribute their results to data mining when they attribute them to unexplained ‘priced factors,’ or give no reasons at all for the effects they find.” In fact, proposed reasons for the Fama-French findings had to be conjured up only after the publication of their 1992 paper and a follow-up paper in 1993. They admitted that they could not offer any sound economic reasons for them in the papers themselves.
Black’s criticism notwithstanding, the two Fama-French papers went on to be routinely referred to as “seminal” by a generation of financial academicians and quantitative practitioners whose level of data mining for “factors” would make Black spin in his grave. What Black would consider data mining these researchers now consider “evidence-based financial economics.” Thus, we have now not only the two factors (three with the market as a whole) that were identified by Fama and French – and evidence for one of those two, small capitalization, has been acknowledged to have faded over time – but what has been called a “factor zoo.”
Some seminal works spawn productive fields; others unfortunately do the opposite.
The precedents
Fischer Black may have been a little unfair. The small stock effect had been known for quite some time in 1992, so it was no surprise to find that portfolios would have historically had higher returns the more they resembled small stock portfolios. In the late 1970s David Booth, in the process of launching Dimensional Fund Advisors, would show anyone who would look at it a graph of the growth of a dollar over time in a small stock portfolio vs. its growth in the S&P 500 (the small stock portfolio soared by comparison). The data was provided by Rolf Banz, who would publish in 1981 an article showing that small stocks had experienced a significant positive risk-adjusted return over the period. This small-cap alpha is generally acknowledged to have dissipated since the publication of Banz’s article, but whether this alpha existed even over the time period that Banz measured has been contested.
Thus, in fact, Fama and French had not a theory but a precedent to spur them to the investigation that they conducted. Data mining may not be a fair accusation against that work. Nevertheless, it had all the trappings of data mining: lack of a theory; minute attention to the details of the construction of the data set, including many apparently arbitrary operations; and an outpouring of statistical results tabulated in a series of hard-to-read tables. All of these have become signatures of the work spawned by Fama/French’s seminal papers. And all of this is considered the hallmark of the new evidence-based financial economics. “Evidence-based financial economics” bears too uncomfortable a resemblance to data mining. The biggest problem with data-mined results in investment finance is that they will not persist in the future.
The theory and the practice
A theory of sorts has been backfitted to the Fama/French results, and a procedure to apply that theory to equity portfolio construction. But both the theory and the practice are seriously deficient.
The theory is that equity asset classes do not capture the granularity of the characteristics of stocks. This interpretation is an offshoot of the incessantly repeated observation that the correlation among investment asset classes increased in the financial crisis of 2008-2009. Hence, the investment world, engaging in the usual practice of trying to close the barn door after the horse has been stolen, has sought new ways to lower correlations.
The new claim is that asset classes are like “molecules,” or like food groups, in that they are aggregates of more fundamental components: atoms in the case of molecules, nutrients in the case of food groups and “factors” in the case of equity asset classes. Supposedly, if you granularize the equity market sufficiently, you can construct sectors or “factors” that have low correlations.
This is an analogy that doesn’t really work. You have to be an ardent believer in the religion to see it. If small-cap is a factor – well, small-cap is also an asset class. How is the small-cap factor more basic than the small-cap asset class – an atom to its molecule? If value is a factor, how is that more fundamental than the value asset class? True, new factors have been introduced since Fama-French – an increasingly large number of them, as Harvey (who identified 315 of them) indicated – and some of those, like profitability, don’t coincide with traditional equity asset classes. But the more factors there are, the more the whole exercise becomes one of data mining.
Some adherent of the factor school could probably provide a reasonable-sounding explanation for this analogy by pointing to the factor models. But here, we find an even bigger problem. Those models simply don’t work in practice. This is not a new problem; mean-variance optimization, the famed Markowitz methodology for finding portfolios that are on the risk-return efficient frontier, doesn’t work in practice either. William Bernstein noted of mean-variance optimization:
“…there is a large and ugly fly in the ointment -- the technique works only in retrospect. It turns out that the outputted portfolio compositions are exquisitely sensitive to even very small changes in the input data. Change a few pieces of the input data slightly and the resultant portfolio compositions change drastically. Since the required input returns, SDs, and correlations are known with precision only in retrospect, mean variance optimization is worthless as a predictor of future optimal portfolios. This is because it is impossible to predict with anywhere near the required accuracy the returns, SDs, and correlations.”
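Bernstein’s point can be seen in a back-of-the-envelope sketch. The expected returns and covariances below are made up purely for illustration; the weights are the textbook unconstrained maximum-Sharpe-ratio solution, proportional to the inverse covariance matrix times the expected returns.

```python
import numpy as np

# Made-up expected excess returns and covariance matrix for three asset classes.
mu = np.array([0.05, 0.055, 0.06])
cov = np.array([[0.0400, 0.0350, 0.0300],
                [0.0350, 0.0425, 0.0320],
                [0.0300, 0.0320, 0.0450]])

def tangency_weights(mu, cov):
    """Unconstrained maximum-Sharpe-ratio weights: proportional to inv(cov) @ mu."""
    raw = np.linalg.solve(cov, mu)
    return raw / raw.sum()

print(np.round(tangency_weights(mu, cov), 3))  # weights with the original inputs
print(np.round(tangency_weights(mu + np.array([0.005, 0.0, -0.005]), cov), 3))  # slightly changed inputs
```

With these made-up numbers, shifting two of the expected returns by half a percentage point moves the first asset’s weight from roughly 9% of the portfolio to roughly 45% – exactly the fragility Bernstein describes.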
Portfolio construction using factors uses the same quadratic programming algorithm as mean-variance optimization. But when the algorithm is applied to factors, the end result of the allocation is still not a portfolio; you now have to construct one, either by using asset classes or by using individual stocks. And that transition from a factor allocation to one that is actually investable is “practically challenging,” as a CFA Institute article by Eugene L. Podkaminer notes. It involves the application of algorithms that are as arcane and questionable, and that can involve as many arbitrary assumptions and inputs, as mean-variance optimization itself. Furthermore, the optimal allocation may call for short sales of stocks or asset groups. In the end, like mean-variance optimized asset allocation, the whole process must be jury-rigged to produce results that are acceptable and can actually be applied.
What will happen when all is said and done and this process, supposedly dictated by sophisticated mathematical modeling, is completed? Like asset allocation, it will produce a result that adheres to the creator’s predilections. The typical asset allocation process tends to be engineered to closely resemble a total market equity index fund, together with whatever admixture of fixed income the creator has decided is appropriate for the risk-tolerance level of the particular investor. The typical factor allocation process will tend to be engineered to resemble a total market equity index fund, but with somewhat higher allocations to the asset classes or “factors” that the creator believes in.
In short, the factor allocation process, like asset allocation, is engineered to produce whatever portfolio suits its designer’s prejudices.
His lips say it’s beta but his eyes say it’s alpha
There is an additional problem with the factor allocation process. It is tied up with total confusion – and, I must say, deception – about what exactly smart beta offers to investors.
There is an intensive academic debate about whether smart beta – i.e., allocation using factors – produces alpha (risk-adjusted outperformance) or only beta (a premium for bearing risk). The debate is complicated by the fact that, of course, if factors do offer alpha, then promoting allocation to those factors too widely will make them stop working.
Suppose, for example, that small value stocks are defined to be those that comprise 5% of the dollars invested in the stock market. Suppose next that marketing campaigns for small value funds quickly attract 10% of the dollars invested in the market. Obviously, for 10% of the dollars to be invested in stocks that currently comprise 5% of the market, the prices of those stocks must rise until they comprise 10% of the market. If they do, as Clifford Asness et al. cunningly note in an article that defends value investing, then those who have already invested in value stock funds will benefit by seeing the prices of their holdings rise. Asness et al. then add, “Unless you believe the returns have already gone away (which, of course, we do not)” … but what if they’re wrong?
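To make the arithmetic concrete, here is a minimal sketch with hypothetical numbers, assuming for simplicity that the prices of all other stocks stay where they are.

```python
# Hypothetical illustration: a $100 total market in which small value stocks make up $5 (5%).
other_cap = 95.0      # combined market cap of all other stocks, assumed to stay fixed
target_share = 0.10   # investors now want 10% of all invested dollars in small value stocks

# Every invested dollar is someone's holding of market cap, so the new small value
# market cap V must satisfy V / (V + other_cap) = target_share.
new_value_cap = target_share * other_cap / (1 - target_share)
print(new_value_cap)  # about 10.6, so the prices of those stocks must roughly double
```

In this toy example the aggregate price of the small value stocks rises by a factor of about 2.1 – a windfall for those who already hold them, which is the effect Asness et al. are pointing to.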
If it’s all beta – expected reward for rationally taking risk – then those who prefer taking more risk by tilting toward riskier factors, such as, presumably, value, must be balanced by those who prefer taking less risk and therefore tilt away from those factors.
If that were the case then the factor allocation process would offer one portfolio, tilted toward value, for the risk-loving investor, and another, tilted away from value and toward lower-expected-return growth, for the risk-averse investor.
But the typical factor allocation process doesn’t do that. It does not identify the risk level that the investor can tolerate on the factor-efficient frontier and then produce a portfolio to suit – though it could. Instead, it uses the quadratic programming algorithm to maximize the Sharpe ratio, which produces only one portfolio on the efficient frontier for all investors. Maximizing the Sharpe ratio looks suspiciously like producing alpha – risk-adjusted outperformance – not beta, mere reward for bearing risk.
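Why does maximizing the Sharpe ratio single out one portfolio for everyone? Here is a minimal sketch under textbook unconstrained mean-variance assumptions, with made-up inputs: the solution for any level of risk aversion is just a rescaling of the same risky mix – the tangency, or maximum-Sharpe-ratio, portfolio – so only the amount left in cash differs from investor to investor.

```python
import numpy as np

# Made-up expected excess returns and covariance matrix for three "factors" or asset groups.
mu = np.array([0.03, 0.04, 0.05])
cov = np.array([[0.0200, 0.0050, 0.0040],
                [0.0050, 0.0300, 0.0060],
                [0.0040, 0.0060, 0.0350]])

# The unconstrained mean-variance solution for risk aversion lam is inv(cov) @ mu / lam.
for lam in (2.0, 5.0, 10.0):              # three investors with very different risk tolerances
    w = np.linalg.solve(cov, mu) / lam    # total exposure to risky assets shrinks as lam grows...
    print(lam, np.round(w / w.sum(), 3))  # ...but the normalized risky mix is identical
```

If the process really were tailoring beta to each investor’s risk tolerance, it would have to pick different points on the frontier rather than re-solve for the same maximum-Sharpe-ratio portfolio.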
I say deception because this all accords with the time-worn way in which investment advice, consulting, and management are sold for extraordinarily high fees – and that goes even for the relatively low-fee smart beta funds, as compared with the ultra-low fees for ordinary cap-weighted passive total market index funds.
In my 2007 book The Big Investment Lie, I relate an experience that a colleague at my firm had when he went to visit a Certified Public Accountant, whom we were trying to recruit to get into the investment advice business (my firm provided, for a percentage of assets, the tools to enable an advisor to do that). My colleague described the fee structure to the CPA, in which the CPA would typically receive one percent of the client’s assets. My colleague used the example of a client with $3 million in investment assets. He said that after he explained this to the CPA, they went to dinner. During the dinner, the CPA suddenly got a light in his eyes, and said, “One percent of $3 million? That’s $30,000! I’ve never been able to get more than $3,000 from that account.” My colleague knew then that the CPA was hooked.
How does an advisor get a client to pay $30,000 a year when he’s never been able to get that client to pay more than $3,000 for services like tax preparation? Because the client believes the $30,000 will be more than made up for by better investment performance without risk. This is the deception that most investment professionals nourish – tacitly if necessary, no matter what they say – and the factor dodge is no exception.
What is diversification?
However contentious the other benefits of factor investing may be, its exponents always fall back on one benefit – the “benefit of diversification.” How then can a portfolio allocated by the convoluted factor allocation process provide more diversification than any other portfolio, allocated by some other process?
No matter how the inputs to the efficient portfolio construction process may be snatched out of the air or reverse-engineered, they must produce the efficient frontier envelope, above which no portfolio can lie. Do the factor enthusiasts claim that by using factor allocation they can somehow make a portfolio come out above the traditional (if only conceptual) efficient frontier?
Perhaps their claim is that by shorting some stocks (or using derivatives) they can do that. But then, if one has any belief in even crudely efficient markets, shorting stocks should in general produce a negative expected return – and the model’s inputs should reflect that expectation. Nevertheless some papers, like that of Asness et al., claim a positive net benefit from shorting stocks in combination with other portfolio features. This can only be the result of historical data mining – it cannot be expected going forward, even if, as I say, one’s expected-return projections assume only crudely efficient markets.
The repetition of “benefit of diversification” is an example of a claim that is never well-defined enough to confirm or refute. Whatever it is, I cannot see how it is possible to have any more diversification benefit if one uses a factor approach to constructing a portfolio than if one uses any other approach.
It could have made sense if they’d just stopped there
Bonds have subclasses that are clearly different in risk and return. Their risks and expected returns can be roughly categorized with adequate granularity by maturity, coupon and quality. It probably wouldn’t make sense for everyone to buy the market-weighted total world (or U.S.) bond portfolio. Some will prefer less risky, high-quality, short-term but low-yield bonds; others, more risky, lower-quality, long-term but high-yield bonds.
The same could be true for equities, though their risks and expected returns are harder to characterize. Some might prefer higher-quality, lower-expected-return sectors, others lower-quality, higher-expected-return sectors. Those who prefer the latter might tilt their equity portfolios toward those sectors. Those who prefer the former might tilt toward higher-quality sectors, or blend a total market equity portfolio with a lower-risk asset class like bonds.
Trying to do rocket science or pretending to do rocket science or even believing you’re doing rocket science doesn’t really help. It will just pull the wool over clients’ eyes and maybe even your own, if you’re lucky. If you’re not lucky you have to live with the knowledge of your deception; some people seem not to mind, and some revel in it.
In any case it’s not good for the investor who has to pay the bill for all of this pretense.
Michael Edesess, a mathematician and economist, is a visiting fellow with the Centre for Systems Informatics Engineering at City University of Hong Kong, a principal and chief strategist of Compendium Finance and a research associate at EDHEC-Risk Institute. In 2007, he authored a book about the investment services industry titled The Big Investment Lie, published by Berrett-Koehler. His new book, The Three Simple Rules of Investing, co-authored with Kwok L. Tsui, Carol Fabbri and George Peacock, has just been published by Berrett-Koehler.