Don’t Believe the Rules in Investing
Advisor Perspectives welcomes guest contributions. The views presented here do not necessarily represent those of Advisor Perspectives.
We would like to think that investing is a science, but, alas, it is not. Water freezes at 32 degrees Fahrenheit. Light travels at 186,000 miles per second. When you throw a ball into the air, gravity always brings it back to Earth. We look for similar rules in investing.
There are none.
Undeterred, we turn to academic research and data to provide answers. If investing is a probabilistic activity rather than a hard science, can we gain an advantage by studying the past? Does the past repeat itself sufficiently and regularly enough to give us a reliable edge?
We believe so.
We apply labels like “evidence-based” and “smart-beta” to our data-driven, historically dependent investment strategies to suggest their superiority. We infuse them with an air of precision using complex math and ratios named after semi-famous people. We cloak them with a patina of empirical validity using Greek and Latin terms and graphics that look back 100 years.
But are any of these strategies based on rules or principles that can be relied upon over reasonable future time frames to consistently produce superior results for investors? Or are we all just accepting investment risk and mistakenly, or perhaps intentionally, mislabeling our luck as skill?
Since we are so fond of data, research, and evidence, let’s look at some.
Equity fund performance
We’ll start in a familiar and comfortable place. The S&P SPIVA US 2019 scorecard shows:
- 70% of domestic equity funds lagged the S&P Composite 1500 in 2019. 83% lagged the benchmark for the five years ending 2019 and 88% lagged for the decade ending 2019.
- 71% of large-cap funds underperformed the S&P 500 in 2019. 80% underperformed for the five years ending 2019 and 89% fell short for the decade ending 2019.
- 68% of mid-cap funds beat the S&P MidCap 400 in 2019, but 64% failed to beat it for the five years ending 2019 and 84% failed to do so for the decade ending 2019.
- 62% of small-cap funds beat the S&P SmallCap 600 in 2019, but 77% failed to beat it for the five years ending 2019 and 89% failed to do so for the decade ending 2019.
The S&P persistence scorecard for 2019 shows:
- Only 3.84% of the domestic equity funds that performed in the top half of that fund group in 2015 performed in the top half every year through 2019.
- Only 21% of the domestic equity funds that performed in the top quartile of that group for the period 2010-2014 also performed in the top quartile for the period 2015-2019.
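A useful yardstick for those persistence figures is what pure chance would produce. If landing in the top half or top quartile each period were a coin flip, staying in the top half for each of the four years after 2015 would happen about 6% of the time, and repeating a top-quartile five-year run about 25% of the time. Here is a minimal back-of-the-envelope sketch; the observed numbers are from the scorecard above, while the chance baselines assume year-to-year independence, an assumption made purely for illustration rather than part of S&P's methodology:

```python
# Back-of-the-envelope comparison: observed persistence vs. pure chance.
# Observed figures come from the S&P Persistence Scorecard cited above.
# The "chance" baselines assume each year's ranking is an independent coin flip,
# which is an illustrative assumption, not S&P's methodology.

chance_top_half = 0.5 ** 4      # stay in the top half each of the next four years (2016-2019)
chance_top_quartile = 0.25      # land in the top quartile again in the next five-year window

print(f"Top half every year through 2019:  chance {chance_top_half:.2%} vs. observed 3.84%")
print(f"Top quartile again in 2015-2019:   chance {chance_top_quartile:.0%} vs. observed 21%")
```

In both cases the observed persistence is no better than the coin-flip baseline, which foreshadows the academic conclusions discussed below.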
Fixed income fund performance
Fixed income funds fared no better. The S&P SPIVA US 2019 scorecard shows:
- 98% of long-term government funds lagged their benchmark in 2019. 100% fell short for the five years ending 2019 and 99% fell short for the decade ending 2019.
- Over 95% of long-term investment-grade funds underperformed their benchmark in 2019. 98% underperformed for the five years and for the decade ending in 2019.
- 65% of high-yield funds lagged their benchmark in 2019. 94% underperformed for the five years ending 2019 and 97% underperformed for the decade ending 2019.
What do the academics say?
In 1997 Mark Carhart published On Persistence in Mutual Fund Performance. He found that persistence, to the extent it appears to exist, is the result of “chance.” His conclusion: “The results do not support the existence of skilled or informed mutual fund portfolio managers.”
In 2010, Eugene Fama and Kenneth French published a paper entitled Luck versus Skill in the Cross-Section of Mutual Fund Returns. Their search for evidence of manager skill was no more successful than Carhart’s. They concluded:
…if many managers have sufficient skill to cover costs, they are hidden by the mass of managers with insufficient skill. … On a practical level, our results on long-term performance say that true alpha in net returns to investors is negative for most if not all active funds.
Is the endowment model the answer?
What about institutional investors? They use the “endowment model” popularized by David Swensen, the chief investment officer at the Yale endowment. They also have access to “big-boy” products like hedge funds and private equity. Does it matter?
Barber and Wang (2013) analyzed the returns earned by U.S. educational endowments for the 21 years ending in June 2011. They concluded: “…we find no evidence that the average endowment is able to deliver alpha relative to public stock/bond benchmarks.”
Dahiya and Yermack (2019) studied investment returns for thousands of endowment funds for the period 2009-2017. They found, “Non-profit endowments badly underperform market benchmarks, with median annual returns 4.46 percentage points below a 60-40 mix of U.S. equity and Treasury bond indexes, and statistically significant alphas of -1.10% per year.”
In October 2019, Markov Processes International published its annual performance review of the Ivy League endowments. For fiscal year 2019, all the Ivies except Brown underperformed the 9.9% return of a domestic 60/40 portfolio. They averaged 6.7%.
The Markov study found that over the past decade, from fiscal year 2010 through fiscal year 2019, the average Ivy delivered a 10.3% return, slightly underperforming the 10.5% annualized return of the 60/40 portfolio.
Ennis (2020) found that for the period 2009-2018, “A composite of large endowment funds underperformed a passive benchmark by an average of 1.6% per year, and a composite of public pension funds underperformed by 1%.” To that he added, “alternative investments proved to be a serious drag on institutional fund performance during the study period.”
Hammond (2020) examined 58 years of college and university endowment performance history. Again, “institutional quality” investment management fell short. The average endowment fund earned 8.1% after fees, while a 60/40 passive benchmark earned 9.1%.
Tactical allocators – Does “deftly switching” work?
It’s not just stock-picking and bond-buying mutual fund managers who have a hard time adding value. In 2019 Morningstar published a paper entitled Do Tactical-Allocation Funds Deliver?
The short answer was “no.”
Tactical-allocation funds seek to deliver better absolute or risk-adjusted returns by, according to Morningstar, “deftly switching exposure among asset classes.” Over the 15 years ending December 2018, the average tactical-allocation fund returned 3.4% annually, lagging Vanguard Balanced Index by 3.2 percentage points per year with similar risk.
This underperformance was consistent throughout the period studied. “Vanguard Balanced Index beat the tactical-allocation category average in every rolling three-year period during the trailing 12 years through December 2018.”
The odds of finding a winning tactical strategy were bleak. “Only 9% of tactical-allocation funds that were around at the end of 2008 went on to survive and outperform Vanguard Balanced Index over the next 10 years.”
Tactical funds didn’t provide better downside protection either. “During the 15 years through December 2018, the average fund in the tactical-allocation category tended to lose 5% more than the Vanguard Balanced Index in months that the Vanguard fund posted negative returns.”
Morningstar concluded: “Tactical investing is hard and most who try it fail.”
Is market timing the solution?
How about market timing gurus? Do they fare any better? Here’s the evidence.
CXO Advisory Group collected timing predictions made by 68 different market timing gurus between 1999 and 2012. You may dismiss market timing gurus as a bunch of quacks, but the group studied includes many names whose articles appear regularly in industry publications.
The data shows that the average guru made correct forecasts about 47% of the time, slightly worse than if they had simply flipped a coin to make their calls.
In an article published by the Brandes Institute in 2016, Wim Antoons surveyed the CXO Advisory Group data for the period 2005 through 2012 – a total of 6,582 forecasts. He found that, “after transaction costs, no single market timer was able to make money.”
What does and doesn’t this data tell us?
Some would say the “evidence” above shows that active managers of any kind cannot develop strategies that consistently produce superior results for investors. Fund managers, institutional managers, tactical allocators, and market timers are simply charlatans, fools, or bumblers.
The experience of tactical allocators and market timing gurus does suggest that predicting future market movements is close to impossible, at least in the short-term. Flipping a coin may be more effective than staring into a crystal ball, but neither approach is useful.
But what about the managers in the S&P/SPIVA study who were not trying to call the market, but were buying and selling securities based on their assessment of future risks and returns?
One thing the data tells us is that if you take a large and diverse group of active managers, all of whom charge a fee for their services, and lump them arbitrarily into broad categories, and compare them to a single index benchmark that includes no fees, most managers will fall short.
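That fee arithmetic alone explains a lot. Below is a minimal simulation, assuming purely for illustration that managers' gross returns are scattered symmetrically around the benchmark and that every manager charges a 1% fee; neither assumption is taken from the SPIVA methodology:

```python
import random

# Illustration only: when gross-of-fee returns straddle the benchmark symmetrically,
# fees alone push most managers below a no-fee index.
# The 8% benchmark, 2% dispersion, and 1% fee are assumptions for this sketch,
# not inputs taken from the SPIVA studies.

random.seed(0)
benchmark = 0.08
dispersion = 0.02
fee = 0.01
n_managers = 10_000

net_returns = [random.gauss(benchmark, dispersion) - fee for _ in range(n_managers)]
share_lagging = sum(r < benchmark for r in net_returns) / n_managers
print(f"Share of managers lagging the no-fee benchmark: {share_lagging:.0%}")
```

Even before asking whether anyone has skill, roughly two-thirds of these hypothetical managers trail the index simply because they charge for their work and the index does not.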
But there is much the data does not tell us about those managers:
- How many were actually trying to beat the benchmark?
- How many were more focused on risk control than raw returns?
- How many were investing only in a small segment of the broad category?
- How many were positioning their portfolios for the future rather than today?
- By how much did they fall short? The amount of their fees or less? By a lot?
- Are we comparing apples to oranges?
Maybe the S&P/SPIVA study is not evidence of massive manager failure at all, but exactly the result you would expect when you herd together a diverse group of managers with differing objectives and try to assess their capabilities with a universal measuring stick.
We have the same issue looking at the performance of the endowment managers. As a group, they did not do well compared to a 60/40 portfolio. But what were their objectives? Did they have funding needs, spending constraints, or investment policies that make a 60/40 portfolio an irrelevant measuring stick? The data is silent.
Does skill exist?
Is there evidence of investment management skill in the data?
There are those who argue, ironically, that skill does not exist in the investment world because there are too many talented people all trying to find an edge. They are armed with the same tools and information, so none can prevail. In effect, they cancel each other out.
On the other hand, there are those who say that investment management is a commodity. This suggests that portfolios are interchangeable and can be stamped out on an assembly line like toasters or toothpicks by anyone with access to a computer. It’s so simple, anyone can do it!
Either investment management is unbelievably simple, or impossibly difficult. Neither of these views comports with what we see if we open our eyes.
What the data shows is that active management is hard work, but that skill exists. Consistently beating benchmarks is challenging, but far from impossible.
In reviewing the S&P/SPIVA data we naturally focus on the managers who failed to beat the benchmarks because the failure rate is so high. It is easy to overlook the cadre of managers who did beat the benchmarks.
If we look at equity fund performance, for example, we find that roughly the same percentage of managers beat the benchmarks over a decade as would have received an “A” grade on a final exam back in our school days. An even greater percentage beat the benchmarks over five years. That group would include most of the “B” students as well.
It’s harder to discern skill or a lack of it from the studies of endowment managers. Those studies report their conclusions in terms of the results achieved by the “average manager.” This obscures the performance of those who may have distinguished themselves among the thousands of managers examined. Their skill is buried in an amalgam of data.
Nevertheless, the S&P/SPIVA data exhibits a pattern that we see in every complex area of human endeavor. Take law, medicine, sports, music, and acting as examples. There is always a wide spectrum of competency with exceptional performers usually occupying a relatively narrow band on that spectrum.
The exceptional performers do not win every case, save every life, or win an MVP award, Grammy, or Oscar every year. They are not necessarily exceptional every day or every month. They have slumps, losing streaks, and times when their ideas or efforts fall flat. But over time, they stand out, even though they compete intensely with other very talented people.
This is consistent with what we see when we examine any large manager database. There is always a group of managers who have had extended periods of strong performance. You also find wide dispersion in the performance of managers in every category you examine. They don’t “win” every quarter or even every year, but some rise above the others.
We even see this among robo-advisors, who use similar investment vehicles and styles to create portfolios algorithmically. Read any issue of Backend Benchmarking’s Robo Report. There are winners and there are losers, and there is wide dispersion among them. Yet, on average, they fail to beat a passive benchmark.
What about the Carhart and Fama/French studies that suggest outperformance is the result of luck, not skill? First, numbers are not always well-suited to capturing the truth. Intangibles like skill, judgment, experience, craftsmanship, and artistry are hard to measure mathematically.
Second, it’s important to understand the basis for the conclusions in those studies. The researchers, of course, have no way to prove whether a manager is skillful or not. They don’t meet with the managers, test them, or examine their investment processes.
Instead, they use statistical methods to compare the results that a large group of managers achieved with the distribution of outcomes they would expect from pure chance. They could be observing luck, or they could be observing skill. The researchers can make inferences from their observations, but they cannot definitively tell if a given manager is lucky or good.
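A stylized version of that comparison helps show why it is so hard to separate the two. The sketch below imagines a world where no manager has any skill at all; the number of managers, the horizon, and the noise level are made-up parameters, and this illustrates only the general idea, not the bootstrap procedure Fama and French actually used:

```python
import random
import statistics

# Stylized "luck only" world: how many zero-skill managers would still post an
# impressive ten-year track record purely by chance?
# Every parameter here is an assumption for illustration, not from the studies cited.

random.seed(1)
n_managers, n_years = 3_000, 10
luck_per_year = 0.04        # annual alpha noise around a true alpha of zero

def ten_year_alpha() -> float:
    """Average annual alpha for a manager with zero true skill."""
    return statistics.mean(random.gauss(0.0, luck_per_year) for _ in range(n_years))

records = [ten_year_alpha() for _ in range(n_managers)]
apparent_winners = sum(r > 0.02 for r in records)   # averaged at least +2% alpha per year
print(f"{apparent_winners} of {n_managers} zero-skill managers averaged +2% alpha for {n_years} years")
```

Researchers compare the actual cross-section of track records against a distribution like this one. If the real world produces more big winners than the zero-skill world, that is evidence of skill in aggregate, but the comparison still cannot tell you whether any particular manager is lucky or good.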
If we look objectively at the evidence, including the S&P/SPIVA study, we see too many examples of managers who have succeeded over long periods to believe they are all lucky. At the same time, there are too many failures and too much dispersion among managers to think that investment management is a simple commodity.
If skill exists, why is it so rare? Why is it so hard to identify skilled managers before they demonstrate that skill with a period of excellent performance? Why do apparently skilled managers appear to lose their touch after a period of great performance?
The nature of the problem
Statistician George E. P. Box once said: “All models are wrong, but some are useful.” The models we use to explain the financial markets are no exception – they are all wrong.
For 30 years investment managers have been relying on models like CAPM, the efficient market hypothesis, and modern portfolio theory despite their obvious and well-documented flaws.
We’ve used other models to help us estimate the expected return on the market and the future value of financial assets – an exercise that Robert Merton characterized as a “fool’s errand.” These models often produce results that prove that Merton was right.
Fortunately, there is enough truth in these models to help us carve out a framework for understanding markets and developing rational approaches to portfolio management. However, we should not mistake our models for reality. They can be useful if we recognize their limitations, but they do not provide formulas for success. Investing is not science.
Newer models like MIT Professor Andrew Lo’s adaptive markets hypothesis and the work of behaviorists like Nobel laureates Daniel Kahneman and Richard Thaler help us appreciate the role that non-rational factors play in market behavior. But this understanding has not translated into insights that give us a consistent and predictable performance edge.
More generalized models, like chaos theory and complex adaptive systems, further advance our understanding of how and why markets behave the way they do. However, none of these models have been shown to advance our predictive capabilities either.
On the contrary, these models suggest that financial markets are simply too complex to predict. They are not fixed systems that obey established rules. Rather, they are dynamic systems driven by the actions of millions of independent agents intensely competing with one another.
Each agent has its own goals, needs, motives, capabilities, and strategies. They adjust as new information becomes available. So, markets reflect not only our rational and emotional aspects, but also our evolutionary and adaptive nature. You can’t step into the same river twice.
It is not enough to look back, make observations about the past, and apply the lessons learned going forward. The world we experienced yesterday is not the world we will experience tomorrow. Our understanding of how markets will behave in the future is always imperfect.
Here are three examples.
We are shooting at a moving target
In 1992, Fama and French identified the size and value premiums in their paper, The Cross-Section of Expected Stock Returns. Their research, which covered a relatively short 28-year period from 1963 through 1990, launched the factor-based approach to investing.
Now, 28 years after the publication of their paper, how have investors who relied on the existence of these premia fared? Neither size nor value has provided excess returns over the post-publication period.
From January 1, 1993 through June 30, 2020, the Russell 1000 large-cap index produced an annualized return of 9.63%, while the Russell 2000 small-cap index produced an annualized return of 8.53%. The Russell 1000 growth index returned 9.76% annually compared to 8.99% for the Russell 1000 value index.
In more recent years, those premia have not provided investors with any benefit either. Over the one-, three-, five-, and 10-year periods ending June 30, 2020, the Russell 1000 large-cap index outperformed the Russell 2000 small-cap index and the Russell 1000 growth index outperformed the Russell 1000 value index.
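A small worked example shows what those annualized gaps mean in compounded terms. The index returns are the figures quoted above; the $10,000 starting stake and the 27.5-year horizon are simply illustrative inputs:

```python
# Compound the annualized returns quoted above over the January 1993 - June 2020 period.
# The $10,000 starting value is illustrative; the return figures are from the text.

years = 27.5
start = 10_000

def terminal_value(annual_return: float) -> float:
    return start * (1 + annual_return) ** years

for name, r in [
    ("Russell 1000 (large-cap)", 0.0963),
    ("Russell 2000 (small-cap)", 0.0853),
    ("Russell 1000 Growth", 0.0976),
    ("Russell 1000 Value", 0.0899),
]:
    print(f"{name:<26} ${terminal_value(r):>10,.0f}")
```

Seemingly modest annualized gaps compound into a meaningful terminal-wealth shortfall over a horizon this long, which is why the post-publication disappearance of the premia matters to investors who positioned for them.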
Are the value and small-cap premia dead? There are longer and shorter time periods where both appear alive and well, so it’s unlikely we have seen the last of them. But as markets react to input from millions of participants and assets flow to those strategies, we should not expect a reprise of the past.
Smart yesterday, not quite as smart tomorrow
In September 2020, Shiyang Huang, Yang Song, and Hong Xiang published a paper entitled The Smart Beta Mirage. They compared the performance of “smart-beta” indexes with the performance of ETFs that track those indexes after their launch.
Here’s what they found: “[T]he return of smart-beta indexes drops from 2.77% per year ‘on paper’ before ETF listing to -0.44% per year after ETF listing.” “Our study shows that smart beta indexes can only outperform the aggregate market in backtests before ETF listings.” “[S]tellar performance only exists in backtests and has no indicative power for ‘real’ performance.”
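One simple mechanism behind that pattern is selection: screen enough candidate rules and the best backtest will look stellar even if no rule has any real edge, and that edge then vanishes once the product goes live. The toy simulation below illustrates only that selection effect; the parameters are made up, and this is not the methodology of the paper:

```python
import random
import statistics

# Toy selection-bias illustration: test many signal-free "strategies" over a
# backtest window, launch the one with the best backtest, then track it live.
# All parameters are made up for this sketch.

random.seed(2)
n_candidates, backtest_months, live_months = 200, 120, 60
monthly_noise = 0.02        # pure noise: every strategy's true expected excess return is zero

def monthly_returns(months: int) -> list[float]:
    return [random.gauss(0.0, monthly_noise) for _ in range(months)]

backtests = [monthly_returns(backtest_months) for _ in range(n_candidates)]
best = max(backtests, key=statistics.mean)

in_sample = statistics.mean(best) * 12
out_of_sample = statistics.mean(monthly_returns(live_months)) * 12  # fresh, unrelated draws

print(f"Best backtest, annualized excess return: {in_sample:+.2%}")
print(f"Same 'strategy' after launch:            {out_of_sample:+.2%}")
```

The in-sample winner looks impressive by construction; out of sample it is just another noise series.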
We should never assume that our observations about the past will translate into rigid rules about the future, no matter how well documented. Nor should we assume that past patterns will not be resurrected later in a somewhat altered state.
The spotlight effect
Years ago, when David Swensen described his success with the endowment model in Pioneering Portfolio Management, hedge funds played a major role. Why did they work so well for Swensen back then and add so little value more recently?
Richard Ennis explained this by examining alternative investments over different time periods.
During what he called the “Golden Age of Alternative Investments” from 1994 to 2008, large endowments, on average, produced excess returns of “410 basis points a year for 15 years.” He attributed this stellar performance, in part, to the use of alternative investments.
This caused what he characterized as “a flood of money” into private markets and hedge funds. According to Ennis, hedge fund assets increased 27-fold between 1997 and 2018 and private equity assets increased 37-fold between 1994 and 2019.
As a result, Ennis says, “pricing in those markets became better aligned with public-market pricing, and those alternative markets became more efficient.” He found that from 2009 through 2018, 99% of the performance of both endowments and public pension funds could be explained by an attribution model consisting only of public stock and bond indexes.
Ennis also found that the standard deviations of returns for the institutional composite and the passive benchmark “…are nearly identical, at 11.10% and 11.16%, which is to say there is no evidence of ‘volatility dampening’ in the return series of the alts-heavy composite.”
The alts market evolved and one of the endowment model’s chief advantages evaporated.
What should we do?
Famed investor Seth Klarman was once asked if investing was more art, science, or craft. He replied: “I would say art first and foremost, craft second, science third.”
He reminds us that investing can be facilitated by science, but markets are living, breathing reflections of human activity and emotions. They don’t move in neat, regular cycles like the tides or the seasons. They are quirky and unpredictable, just like we are.
Nothing works all the time. Market participants react and adapt. The impact their actions will have is never certain, and this puts a premium on remaining diligent, flexible, skeptical, and cautious. Tomorrow will not be like yesterday. Markets don’t behave according to fixed rules like objects in the physical world.
We must also put data in its place and not give it more credit than it deserves. It is flat and one-dimensional, like an x-ray. It is not a complete representation of the markets. It cannot tell us precisely what will happen in the future, but can help us define the range of possibilities. Even those boundaries are fuzzy and changeable. We are always playing the odds.
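One practical way to use data in that spirit is to frame ranges of outcomes rather than point forecasts. Below is a bare-bones Monte Carlo sketch; the 6% expected return, 12% volatility, ten-year horizon, and starting value are assumptions chosen only to illustrate the idea:

```python
import random

# Bare-bones Monte Carlo: use data to frame a range of outcomes, not a point forecast.
# The return, volatility, horizon, and starting value are illustrative assumptions.

random.seed(3)
n_paths, years = 10_000, 10
mu, sigma, start = 0.06, 0.12, 100_000

def terminal_value() -> float:
    value = start
    for _ in range(years):
        value *= 1 + random.gauss(mu, sigma)
    return value

outcomes = sorted(terminal_value() for _ in range(n_paths))
p10, p50, p90 = (outcomes[int(n_paths * p)] for p in (0.10, 0.50, 0.90))
print(f"10th / 50th / 90th percentile after {years} years: ${p10:,.0f} / ${p50:,.0f} / ${p90:,.0f}")
```

Even then, the range is only as trustworthy as the assumed inputs, and those boundaries, as noted above, are fuzzy and changeable.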
More importantly, data cannot tell us when events will transpire. As John Maynard Keynes once said, “the markets can remain irrational longer than you can remain solvent.”
We should respect and be curious about academic research, but we should recognize its limitations. In 2014 Robert Novy-Marx published a paper, Predicting anomaly performance with politics, the weather, global warming, sunspots, and the stars. In it, with tongue in cheek, he applied standard academic research techniques to determine the predictive power of a range of variables on market returns and investment anomalies.
He concluded: “Predictive regressions find that the party of the U.S. president, the weather in Manhattan, global warming, the El Niño phenomenon, sunspots, and the conjunctions of the planets all have significant power predicting the performance of popular anomalies.”
He cautioned: “readers may be inclined to reject some of this paper’s conclusions solely on the grounds of plausibility. I urge readers to consider this option carefully…as doing so entails rejecting the standard methodology on which the return predictability literature is built.”
We should not deny what we know to be true about the world just because it cannot be proven or disproven using standard statistical measures. Remember, those tools come from the same toolbox used by Novy-Marx to prove the predictive value of sunspots.
So, let’s take off our white coats and stop pretending to be scientists. Our goal is not to win a prize at the science fair. Our goal is to help our clients reach their financial goals.
We will not succeed if clients don’t trust us. Tell your clients you don’t have all the answers. They will find this out soon enough anyway. Teach them that investing is a probabilistic exercise that requires knowledge, discipline, patience, and, yes, sometimes a bit of luck.
The world of investing is rich with subtlety and full of mystery. We are guides, not scientists. One of the most important characteristics of a good guide is to have a detailed understanding of the terrain over which you will journey and share it with those you lead.
If we truthfully describe the nature of the journey to our clients and our role in it, and prepare them for the experience they will have, they will follow us even through the toughest times.
Water doesn’t care what we do – it still freezes at 32 degrees. Light and gravity don’t care either. Markets, on the other hand, are human creations that reflect the best and worst of our nature and evolve over time, just as we do. This is a lesson we must teach our clients.
Scott MacKillop is CEO of First Ascent Asset Management, the first TAMP to provide investment management services to financial advisors and their clients on a flat-fee basis. He is an ambassador for the Institute for the Fiduciary Standard, the winner of the Investments & Wealth Institute’s 2019 Governance Insight Award, and a 45-year veteran of the financial services industry. He can be reached at [email protected].