When Will Objectivity Enter the
Active vs. Passive Debate?

David B. Loeper, CIMA®, CIMC®
August 19, 2008



We start with a universe of 12,039 funds and share classes with six years of data ending June 2008, to focus on Howard’s recent alpha surge. Instead of arbitrarily labeling any “domestic anything” fund against the S&P 500, we apply two criteria to benchmark the funds, which helps avoid some of the inherent misclassification and mislabeling. The first is macro holdings: under our rules, an equity fund must hold at least 70% in equities, a balanced fund at least 25% in both bonds and stocks, a fixed income fund at least 70% in bonds, and so on. Granted, this will eliminate alpha created by radical asset allocation skill and luck, but Howard eliminates those too, so give me a pass on that; I objectively admit this is nothing other than useless past data. Second, we benchmark each fund against its best fit among 31 sub asset classes, in an attempt, for example, to avoid giving false kudos to a mid cap fund measured against the S&P 500. This isn’t rocket science, just basic common sense: benchmark against the best fit style once the macro asset class holdings fit. The fundgrades website lets you grade funds against any of the 31 sub asset classes if you don’t like how our screening criteria benchmark a fund.
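For the curious, here is a minimal Python sketch of those two screens. The macro thresholds (70% equity, 25%/25% balanced, 70% bond) come from the text; the use of R-squared of monthly returns as the “best fit” measure is my assumption, since the article does not say how fit is scored, and the function names are hypothetical.

```python
import numpy as np

# Macro asset-class rules described in the text; categories beyond these
# three are not spelled out, so this is only a partial sketch.
def macro_class(pct_equity, pct_bond):
    if pct_equity >= 0.70:
        return "equity"
    if pct_bond >= 0.70:
        return "fixed income"
    if pct_equity >= 0.25 and pct_bond >= 0.25:
        return "balanced"
    return "other"

# "Best fit" among the 31 sub asset classes. The article does not specify
# the fit measure; here I assume the benchmark whose returns have the
# highest R-squared against the fund's returns.
def best_fit_benchmark(fund_returns, benchmark_returns_by_name):
    best_name, best_r2 = None, -1.0
    for name, bench_returns in benchmark_returns_by_name.items():
        r = np.corrcoef(fund_returns, bench_returns)[0, 1]
        if r * r > best_r2:
            best_name, best_r2 = name, r * r
    return best_name
```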

What does the six years of data show? Not surprisingly (unless you are fooled by Howard’s easy bar to beat), 70% of the funds underperformed their best fit sub asset class. So what? Keep in mind this is all funds measured against 31 sub asset classes. Of the mere 618 funds and share classes whose best fit was the S&P 500, 76.7% underperformed. Now there is some surging alpha!

As for the “newer fund” alpha supposedly identified by Howard’s “research”: of the 12,039 funds, 70.43% underperformed their best fit benchmark in the first three years. And the second three-year period clearly demonstrates Howard’s surging “alpha skill trend”: in a broader universe of 15,255 fund share classes, only 66.14% underperformed their best fit benchmark.

On the risk side, many funds have less risk than their benchmarks. This is to be expected, since no benchmark includes cash and nearly every fund holds some, tempering at least a micronic amount of volatility by our precise measures. In fact, over the six-year period, 54% of all funds had a lower standard deviation than their benchmark (kudos to the 59.2% of large blend funds with a lower standard deviation for holding a little cash to manage redemptions… what skill!). Interestingly, the average standard deviation was 100.25% of the benchmark’s standard deviation and the median was 99.32%. This is not statistically meaningful. But, as one would anticipate, all funds together, when classified appropriately against a broad universe of benchmarks, carry roughly market risk on average.

Is there really anything surprising in this data? Not yet. Also not surprisingly, the funds that outperformed their benchmark took more risk relative to it, and those that underperformed took less. What a shocker! The outperformers averaged 109% of their benchmark’s standard deviation, and 59% of them had more standard deviation than their benchmark, versus 46% of all funds. The underperformers averaged 96% of their benchmark’s standard deviation, and 60% of them had less risk. The funds that beat their benchmark were two and a half times as likely (26% versus 11%) as the underperformers to have a standard deviation above 115% of the benchmark’s, our criterion for a relative risk grade of “F.” Still no surprise here.
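Here is a minimal sketch of that relative risk measure, assuming monthly return series: fund standard deviation expressed as a fraction of its best fit benchmark’s, with the stated 115% threshold flagging a relative risk grade of “F.” The other grade cutoffs are not given in the article, so only that flag appears, and the function name is hypothetical.

```python
import numpy as np

def relative_risk(fund_returns, benchmark_returns):
    # Ratio of fund volatility to benchmark volatility (sample std dev).
    ratio = np.std(fund_returns, ddof=1) / np.std(benchmark_returns, ddof=1)
    # The article's criterion: more than 115% of benchmark risk earns an "F".
    grade_f = ratio > 1.15
    return ratio, grade_f
```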

You might have skill. Or you might just think you do. There is a big difference. You might assume that all excess results are caused by skill, and that luck is a figment of every winning gambler’s imagination. Yet winning gamblers often falsely attribute their luck to skill, a betting system, or some other secret method, much like active advocates in money management.

I am fairly confident that skill exists, although I do not have very good evidence for it. For now it remains an unprovable intuition. I suspect that such skill, if it exists, is somewhat rarer than the percentage of funds that happen to outperform through luck, because I am equally confident that luck exists. The existence of luck is far easier to demonstrate mathematically, but, as with Wermers’ work, one cannot tell which funds were merely lucky.
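To see why luck alone generates “winners,” consider a toy simulation, which is my own illustration rather than the author’s data: thousands of funds with zero true skill, a modest tracking error around their benchmark, and an expense drag. Even with no skill anywhere, a meaningful share ends the six years ahead of the benchmark, and nothing in the track record distinguishes them from the rest. All parameters are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: 10,000 zero-skill funds, 72 monthly periods
# (six years), 4% annualized tracking error, 1% annual expense drag.
n_funds, n_months = 10_000, 72
monthly_te = 0.04 / np.sqrt(12)
monthly_fee = 0.01 / 12

# Monthly excess return over the benchmark is pure noise minus fees.
excess = rng.normal(-monthly_fee, monthly_te, size=(n_funds, n_months))

# Simple sum of excess returns (ignores compounding for brevity).
cumulative_excess = excess.sum(axis=1)

lucky_winners = (cumulative_excess > 0).mean()
print(f"Share of zero-skill funds beating their benchmark: {lucky_winners:.1%}")
```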

Beyond this, I have to weigh the bet being made against the odds that are knowable, in my clients’ interests. I don’t know the odds that skill exists, how common it might be, or how much value it can add. To an objective scientist, identifying skill in historical data samples is not sufficiently provable when weighed against the odds that are knowable and against how skilled a manager must be to make up for the risk of material underperformance that the attempt to outperform introduces. Marketers won’t sell this, because real statistics are not marketable. However, your clients can benefit if you keep your skepticism high enough to avoid being fooled.
