Performance Measurement: Danger of Point-in-Time Analysis

I hear it on TV, I see it in ads and I get the sales pitch. “Fund XYZ is a 5-star fund according to Morningstar,” or “Fund XYZ is in the top quartile of its peer group for the trailing one- and three-year periods.” It sounds impressive. But should we listen? No, not really. Should you invest in a fund based on this data? Definitely not. Viewed through a critical lens, these statements are little more than a means to pique investor interest in a product. And mutual fund investors love to chase short-term performance, so these types of statements tend to be fairly effective. Who doesn’t want to invest with a “winner”? The implication is that if a fund has performed well, it must be a good investment. Before I get too carried away, I would like to point out that it’s not that the statements themselves are bad; it’s that, in my opinion, they encourage bad investor behavior: chasing strong short-term performance.

To start, the timeframe the statement is based on, whether it’s the past one-, three- or five-year period, is arbitrary. It doesn’t provide any meaningful or useful data beyond the fact that this particular manager performed well at some point in the past. It’s historical data, and it provides no meaningful indication of future performance. In fact, strong short-term results (trailing one, three and five years) may be more of a negative than a positive, especially for investors tempted to jump in immediately after a period of strong performance.

This brings me to the topic of today’s post: the pitfalls of traditional performance analysis, or what I refer to as “point-in-time analysis.” I’ll discuss why I believe point-in-time analysis is misleading or, at a minimum, not very useful, especially when examining the performance of active investment managers. Then I will recommend a few alternative methods of performance measurement and explain why I believe these methods are superior.

Performance Measurement – Trailing Periods
Point-in-time analysis is essentially the measurement of manager performance from an arbitrarily selected endpoint (usually the most recent month-, quarter- or year-end) over trailing periods of time, typically one, three and five years. Point-in-time analysis tends to be the most common way to report the historical performance of a manager. Unfortunately, I think it’s one of the worst ways to evaluate performance. For those who read my previous posts on active vs. passive, this example will be familiar, but it’s still worth revisiting.

In the table below, there are four rows of data, each showing the decile rank of a return stream over the trailing one-, three- and five-year periods. To clarify, the returns are ranked based on how the given manager stacks up relative to a group of managers that follow a similar investment strategy (otherwise known as a peer group). A decile rank of 1 is the best ranking and means the manager generated a return within the top 10% of managers in its peer group for a given time period; a decile rank of 10 is the worst.
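As an aside for quantitatively minded readers, here is a minimal sketch of how a decile rank might be computed. The peer-group returns are randomly generated placeholders and the manager names are hypothetical; the point is only the bucketing mechanics:

```python
import numpy as np
import pandas as pd

# Hypothetical trailing returns for a 50-manager peer group (placeholder data).
rng = np.random.default_rng(3)
peer_returns = pd.Series(rng.normal(0.08, 0.06, 50),
                         index=[f"mgr_{i}" for i in range(50)])

# Decile rank: 1 = top 10% of the peer group, 10 = bottom 10%.
# Ranking descending first means rank 1 is the best return; qcut then
# buckets those ranks into ten equal-sized bins.
deciles = pd.qcut(peer_returns.rank(ascending=False), 10, labels=range(1, 11))
print(deciles.loc["mgr_0"])
```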

With that in mind, which manager appears to be the most attractive in the table below?

Most individuals would select the fourth row of data (circled in green) because that return stream ranks in the top decile (or top 10%) of managers in the peer group over the trailing one-, three- and five-year timeframes. At the same time, there would probably be very few individuals who would select the return stream in the second row (circled in red), because those returns rank in the 9th, 7th and 7th deciles over the trailing one-, three- and five-year timeframes, respectively.

However, they’re actually the same manager! The only difference between the four rows of data is the arbitrary selection of an endpoint. So what happened? Why are there such large disparities in peer group rank when looking at the trailing one, three and five years of data from one year to the next? You’ll notice in the first line of the table below that a very good year of relative performance from 2002 (circled in green) was dropped and a poor year of relative performance in 2007 (circled in red) was added. That single change resulted in a dramatic shift in the manager’s ranking relative to peers.

This example shows the impact an endpoint has on trailing period returns. The reason comes back to simple math: recent performance (one year in this example) contributes significantly to the calculation of shorter trailing performance periods. For instance, the last year of performance obviously accounts for 100% of the return for the trailing one-year period, approximately 33 1/3% of the return for the trailing three-year period and 20% of the return for the trailing five-year period. This is one of the major reasons why trailing period performance measurement (i.e., point-in-time analysis) is a poor way of looking at active manager performance: short-term results dominate shorter trailing return periods.
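To make that math concrete, here is a minimal sketch in Python. The calendar-year returns are made up for illustration (they are not the manager from the table), but they show how shifting the endpoint by a single year changes every trailing figure at once:

```python
import numpy as np

# Made-up calendar-year returns (illustrative only).
annual_returns = {
    2002: 0.28, 2003: 0.12, 2004: 0.09,
    2005: 0.07, 2006: 0.11, 2007: -0.15,
}

def trailing_annualized(returns_by_year, end_year, n_years):
    """Annualized (geometric) return over the n calendar years ending in end_year."""
    years = range(end_year - n_years + 1, end_year + 1)
    growth = np.prod([1 + returns_by_year[y] for y in years])
    return growth ** (1 / n_years) - 1

# Moving the endpoint from 2006 to 2007 swaps one year out and one in, yet
# every trailing figure (1-, 3- and 5-year) changes at once.
for end in (2006, 2007):
    print(end, [round(trailing_annualized(annual_returns, end, n), 4)
                for n in (1, 3, 5)])
```

Run with the two endpoints above, the same return stream produces two very different sets of trailing numbers, which is exactly the disparity shown in the ranking table.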

Performance Measurement – Alternative Method #1
The first alternative to point-in-time analysis is calendar-year performance. While not perfect, looking at calendar-year performance helps “tell a better story” of manager results over time. For instance:

· Has long-term performance been driven by only one or two years of strong or weak performance?

· Did the manager perform better than peers in up markets or down markets?

· If there were any large performance deviations in any single year, what were the causes?

Essentially, shorter, stand-alone periods provide a better understanding of how a manager performs throughout a market cycle and help develop a basis for further analysis.
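For readers who want to reproduce this view, the sketch below compounds monthly returns into stand-alone calendar-year returns using pandas. The monthly figures are randomly generated placeholders, not real fund data:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly returns for one manager, keyed by (year, month).
rng = np.random.default_rng(0)
records = [(year, month, rng.normal(0.008, 0.04))
           for year in range(2003, 2008) for month in range(1, 13)]
df = pd.DataFrame(records, columns=["year", "month", "ret"])

# Compound each year's twelve monthly returns into a stand-alone
# calendar-year return: (1 + r1)(1 + r2)...(1 + r12) - 1.
calendar_year = df.groupby("year")["ret"].apply(lambda r: (1 + r).prod() - 1)
print(calendar_year.round(4))
```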

Performance Measurement – Alternative Method #2
The second alternative is rolling period performance relative to a peer group. Rolling periods represent moving windows of performance. In the charts below, the performance ranks of managers in the universe are broken into “quartiles,” or 25% increments, represented by the four shades of the background in each chart. The top quartile represents the best-performing managers. Each endpoint represents the performance rank of the manager for the time period shown at the top of the graph. For instance, in the bottom chart, each point on the line represents where the manager’s trailing seven-year performance (given the end date on the bottom axis) ranks in relation to all managers in its peer group or universe. The light blue shading represents the “top,” or best-performing, quartile of managers for that timeframe.

Using point-in-time analysis instead of the chart below would give you only one return period (one return and one rank for that specific endpoint). However, with rolling one-year return ranks, we get to look at over 150 data points (represented by the line). Thus, a benefit of rolling periods is that they help eliminate the short-sighted nature of evaluating managers based on results from an arbitrary endpoint over a single trailing period.
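As a rough illustration of the mechanics behind these charts, the sketch below computes rolling 12-month returns for a hypothetical 20-manager universe and converts each month’s cross-section into percentile ranks. All figures are randomly generated placeholders:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly returns for a 20-manager peer universe (placeholder data).
rng = np.random.default_rng(1)
months, n_mgrs = 180, 20
universe = pd.DataFrame(rng.normal(0.007, 0.04, (months, n_mgrs)),
                        columns=[f"mgr_{i}" for i in range(n_mgrs)])

# Rolling 12-month compound return for every manager: one observation
# per month-end rather than a single trailing number.
rolling = (1 + universe).rolling(12).apply(np.prod, raw=True) - 1

# Percentile rank of each manager across the universe at every endpoint
# (1.0 = best in peer group); plotting one column reproduces the line in
# the chart, with roughly 170 data points instead of one.
ranks = rolling.rank(axis=1, pct=True)
print(ranks["mgr_0"].dropna().tail())
```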

Performance Measurement – Alternative Method #3
The third and last method is evaluating performance over specific periods of time. In this example, we look at how managers have performed in various market environments, such as bull market rallies, stock market corrections and a full market cycle. A manager’s performance during these periods serves as a basis for further analysis and future expectations.

In the table below, note all four managers ranked in the top one or two deciles of their respective peer group over the period labeled “rally #1.” Subsequently, three of the four managers performed relatively poorly compared to peers in the period labeled “correction #1.” The managers’ performance rank relative to peers during the correction is not necessarily surprising given the strong relative results during the rally. In the third period, labeled “rally #2,” three of the managers performed relatively well, while one manager performed relatively poorly. Given manager #3’s relatively strong performance in the first rally, the weak results in the second rally may raise a red flag for further analysis.

Lastly, since we believe in long-term investing and in the benefits of evaluating managers over a complete market cycle, we look at results over the entire period, which encompasses both the up- and down-market periods in the previous columns.

The takeaway here is that, despite shorter-term ups and downs, all four managers significantly outperformed over the entire period. This helps provide support for a long-term commitment to an investment.
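For completeness, here is a minimal sketch of this environment-based slicing. The period boundaries, manager names and returns are all assumptions for illustration; in practice the environments would be dated from actual index peaks and troughs:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly returns for four managers over four years.
rng = np.random.default_rng(2)
returns = pd.DataFrame(rng.normal(0.006, 0.05, (48, 4)),
                       columns=[f"mgr_{i}" for i in range(1, 5)])

# Assumed environment labels: fixed month counts here, but a real analysis
# would date the rallies and corrections from index drawdowns.
labels = np.array(["rally #1"] * 18 + ["correction #1"] * 12 + ["rally #2"] * 18)

# Cumulative return of each manager within each environment, plus the
# full cycle spanning all of them.
by_period = returns.groupby(labels).apply(lambda r: (1 + r).prod() - 1)
full_cycle = (1 + returns).prod() - 1
print(by_period.round(3))
print("full cycle:", full_cycle.round(3).to_dict())
```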

Conclusion
Performance measurement can be helpful as one small piece in the overall evaluation of an active manager. However, performance needs to be measured in a variety of ways, so that it becomes more than just a number, a rank or a selling point.

Charles J. Batchelor, CFA, is Director of Investment Research for Cleary Gull.

© Cleary Gull