How Many Monkeys Does It Take to Find a Successful Strategy?
Give a monkey enough darts and she will eventually hit the bull's-eye on a dartboard. We wouldn’t dare consider that monkey an expert dart thrower, but investment professionals have been using essentially that same logic to assert that their strategies – often called “smart betas” – will outperform the market. New research exposes the faulty mathematics upon which such claims are based.
Early this month, four academicians — David H. Bailey and Marcos Lopez de Prado of Lawrence Berkeley National Laboratory, Jonathan M. Borwein of the University of Newcastle in Australia and Qiji Jim Zhu of Western Michigan University — posted a paper on the Social Science Research Network saying deservedly harsh things about backfitting abuses in investment management. “Recent computational advances allow investment managers to search for profitable investment strategies,” the authors wrote. “In many instances, that search involves a pseudo-mathematical argument, which is spuriously validated through a simulation of its historical performance (also called a backtest).”
They feel strongly enough about these abuses to write: “We would like to raise the question of whether mathematicians should continue to tolerate the proliferation of investment products that are misleadingly marketed as mathematically founded.”
Their statements apply, however, beyond conventional backfitting methodology. Several prominent recent claims for investment strategies have been based on the same one-two punch that the authors decry: a pseudo-mathematical argument combined with historical performance that may or may not be repeated in the future.
We will explain the backtesting addressed in their paper. Then we will consider arguments in favor of “smart beta,” which were bolstered by a widely read article in the July 6 issue of The Economist.
The passive superego tussles with the active id
It is a human tendency to assume that past trends can be extrapolated into the future, but it is widely known that this assumption is not valid when it comes to investment returns.
Nevertheless, many claims for particular investment strategies do present past returns. Investors either can’t let go of the intuition that past returns must predict future returns, or they don’t know that this has been disproven. For example, almost all mutual fund advertising is based on past returns data.
But many investors’ superegos, which know that past investment history is not predictive of future returns, have been overwhelming their ids, which believe it is. This can be seen in the mounting popularity of passive market-weighted index funds.
To continue selling investment products on the basis of backtests of past history, marketers sometimes try to buttress that history with mathematical-sounding arguments claiming there is a theoretical reason why history should repeat itself. In almost all cases, however, those arguments are only pseudo-mathematics.
Failing to test out-of-sample
Bailey et al. first define in-sample (IS) and out-of-sample (OOS) testing. IS refers to the performance of a strategy on the data sample used to design the strategy. OOS performance is measured on a data sample that was not used in the design of the strategy. It is common to retain a data set for OOS testing – for example, to divide 10 years of data into two five-year periods, using one for the IS design of the strategy and the other for OOS testing.
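The IS/OOS split can be illustrated with a minimal sketch. The code below is not from the paper; the momentum strategy, the synthetic random-walk prices, and all parameter choices are illustrative assumptions. It "designs" a strategy by choosing the lookback window with the best in-sample Sharpe ratio, then evaluates that same lookback out of sample:

```python
import random

random.seed(42)

def sharpe(returns):
    """Simple (unannualized) Sharpe ratio: mean over standard deviation."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / n
    return mean / var ** 0.5 if var > 0 else 0.0

def momentum_strategy(prices, lookback):
    """Go long for one day whenever price exceeds its level `lookback` days ago."""
    rets = []
    for t in range(lookback, len(prices) - 1):
        signal = 1 if prices[t] > prices[t - lookback] else 0
        rets.append(signal * (prices[t + 1] / prices[t] - 1))
    return rets

# Synthetic random-walk prices: 10 "years" of 252 daily steps each.
prices = [100.0]
for _ in range(2520):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

# Divide the 10 years into two five-year periods: IS first, OOS second.
split = len(prices) // 2
is_prices, oos_prices = prices[:split], prices[split:]

# Design step: search many lookbacks and keep the one with the best IS Sharpe.
lookbacks = range(5, 101, 5)
best = max(lookbacks, key=lambda lb: sharpe(momentum_strategy(is_prices, lb)))

print("best IS lookback:", best)
print("IS Sharpe: %.3f" % sharpe(momentum_strategy(is_prices, best)))
print("OOS Sharpe: %.3f" % sharpe(momentum_strategy(oos_prices, best)))
```

Because the prices here are a pure random walk, no lookback has any real predictive power; the in-sample search will nonetheless tend to surface a parameter with a flattering IS Sharpe ratio, while the OOS figure typically falls back toward zero – exactly the gap that honest OOS testing is meant to expose.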