Here We Go Again: Merton Share and Why I Don’t Use Retirement Calculators

William Bernstein

Remember portfolio mean variance optimization (MVO)? Unless you’re of Medicare age, you might not, but in the late 1980s, it was all the rage.

In 1952, Harry Markowitz published an algorithm that computed the “efficient frontier” of portfolios: those delivering the highest return at a given level of volatility (which he defined as portfolio variance, the square of standard deviation) or, conversely, the lowest variance at a given level of return.
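In symbols (a standard statement of the problem, not drawn verbatim from Markowitz’s paper): with portfolio weights $w$, expected returns $\mu$, and covariance matrix $\Sigma$, the optimizer solves

$$
\min_{w}\; w^{\top}\Sigma\, w
\quad\text{subject to}\quad
w^{\top}\mu = \mu_{\text{target}},\qquad \sum_i w_i = 1,
$$

and sweeping $\mu_{\text{target}}$ over its feasible range traces out the frontier.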

For decades after that article’s publication, the algorithm’s laborious matrix algebra lay beyond the reach of ordinary practitioners, but by around 1990 money managers in search of the MVO fountain of financial youth were eagerly feeding its three inputs – returns, variances, and correlations – into their shiny new desktop computers. The resulting portfolios, thick with the equities of Japanese companies, gold miners, and microcaps, underperformed miserably.
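Here is a minimal sketch of what those desktop optimizers were doing, written in modern Python rather than the tools of the era; the three-asset returns, volatilities, and correlations are hypothetical, chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs for three assets -- the same three quantities
# practitioners fed their desktop optimizers: expected returns,
# volatilities (standard deviations), and a correlation matrix.
mu = np.array([0.08, 0.10, 0.12])        # expected annual returns
vol = np.array([0.12, 0.18, 0.25])       # annual standard deviations
corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])
cov = np.outer(vol, vol) * corr          # covariance matrix

def frontier_weights(target_return):
    """Long-only weights with minimum variance at a target return."""
    n = len(mu)
    constraints = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},           # fully invested
        {"type": "eq", "fun": lambda w: w @ mu - target_return},  # hit the target
    ]
    result = minimize(lambda w: w @ cov @ w,      # minimize portfolio variance
                      x0=np.full(n, 1.0 / n),
                      bounds=[(0.0, 1.0)] * n,    # long-only
                      constraints=constraints)
    return result.x

# Sweep target returns to trace out the efficient frontier.
for target in np.linspace(0.08, 0.12, 5):
    w = frontier_weights(target)
    print(f"return {target:.1%}: volatility {np.sqrt(w @ cov @ w):.1%}, "
          f"weights {np.round(w, 3)}")
```

A production optimizer of the day differed mainly in scale – dozens of asset classes rather than three – not in kind.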

The next fad in quantitative portfolio management was Value at Risk (VaR), which assessed, by a variety of methods, the probability of an asset’s encountering a loss of a given size within a given time frame. VaR achieved near-cult status among financial practitioners, just in time to fall apart during the 2007–2009 financial crisis.
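The simplest of those methods is historical simulation: read the loss threshold straight off the empirical distribution of past returns. A minimal sketch, with a simulated return series standing in for real market data:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in for a history of 1,000 daily portfolio returns; a real VaR
# system would use actual market data here.
daily_returns = rng.normal(loc=0.0003, scale=0.01, size=1000)

# Historical-simulation VaR: the loss that past returns exceeded only
# (1 - confidence) of the time, read off the empirical distribution.
confidence = 0.95
var = -np.percentile(daily_returns, 100 * (1 - confidence))
print(f"1-day {confidence:.0%} VaR: {var:.2%} of portfolio value")
```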

Why was that? Garbage in, garbage out. In the case of MVO, the algorithm’s outputs are exquisitely sensitive to its inputs: changing an asset’s estimated or historical return by a percentage point in either direction causes it either to dominate the portfolio or to disappear from it entirely. Wags soon named the new financial engineering fashion the “error maximizer.” Similarly, the inadequacies of VaR’s underlying historical database rendered it worse than useless during an honest-to-God meltdown.
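The error-maximizing behavior is easy to demonstrate. In the sketch below (hypothetical inputs again, and an unconstrained shortcut in place of the full constrained optimization), two of the three assets are near-substitutes with a correlation of 0.9; bumping one asset’s expected return up by a single percentage point flips it from a short position to more than a third of the portfolio:

```python
import numpy as np

# Hypothetical inputs: assets 1 and 2 are near-substitutes (correlation
# 0.9, identical volatility), which is where MVO's instability bites.
vol = np.array([0.15, 0.15, 0.25])
corr = np.array([[1.0, 0.9, 0.2],
                 [0.9, 1.0, 0.2],
                 [0.2, 0.2, 1.0]])
cov = np.outer(vol, vol) * corr

def mv_weights(mu):
    """Unconstrained mean-variance weights, proportional to inv(cov) @ mu,
    normalized to sum to one -- a shortcut that exposes the sensitivity."""
    raw = np.linalg.solve(cov, mu)
    return raw / raw.sum()

base = np.array([0.08, 0.09, 0.12])
bumped = base + np.array([0.01, 0.0, 0.0])  # asset 1's return, up one point

print(np.round(mv_weights(base), 2))    # approx [-0.10  0.80  0.30]
print(np.round(mv_weights(bumped), 2))  # approx [ 0.36  0.36  0.29]
```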