Sell in September? Time for a Reality Check!

Advisor Perspectives welcomes guest contributions. The views presented here do not necessarily represent those of Advisor Perspectives.

Anyone paying attention to market news must know that September is the weakest month for stock market performance.  Mark Hulbert, one of our favorite writers, calls it "the cruelest month."  He summarizes the table of returns as follows:

Notice from the table that in all but one of the last 11 decades, September was a below-average performer. In more than half the decades, in fact, the month's rank was dead last.

Why, given such an overwhelming record, would anyone question September's bad record? Because there is no good theory for why the month should be such an awful month for the stock market. And, without such an explanation, there's the distinct possibility that the statistical pattern is just a fluke.

Data mining dangers

Any time we have lots of data, there will be some apparently strong relationships that occur simply by chance.  My favorite example comes from an “expert” on state lotteries, who wrote, “In the Pennsylvania Lottery, certain numbers like to come up with their near neighbors.”

Foolish as this claim is, our article on state lotteries shows that most people believe it.

It is a great example of how people ascribe patterns to purely random data, one that pre-dates Nassim Nicholas Taleb’s Fooled by Randomness, a book we send as a gift to many clients.

Hulbert is well aware of the dangers of data mining.   By way of illustration, he mentions the tale of an economist who sought to identify the variable with the strongest correlation to the S&P 500.  By analyzing the economic data series available on a United Nations CD-ROM, he found that butter production in Bangladesh was most closely correlated with equity market returns.
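The butter story is easy to reproduce.  Here is a minimal sketch (hypothetical data, not the economist's actual exercise): generate a few thousand purely random "economic series," then search for the one most correlated with an equally random "market return" series.  The winner always looks impressive.

```python
import numpy as np

rng = np.random.default_rng(42)

n_obs = 30        # annual observations, roughly a CD-ROM-era series length
n_series = 5000   # candidate "predictors," every one pure noise

market = rng.normal(size=n_obs)                  # fake equity returns
candidates = rng.normal(size=(n_series, n_obs))  # fake economic series

# Correlation of each candidate series with the market series
corrs = np.array([np.corrcoef(c, market)[0, 1] for c in candidates])

best = np.abs(corrs).argmax()
print(f"Best of {n_series} random series: r = {corrs[best]:+.2f}")
# With 5,000 tries on 30 observations, the winning |r| is typically
# above 0.6: impressive-looking, and completely meaningless.
```

Search enough series and one of them will always "explain" the market, whether it is Bangladeshi butter or anything else.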

Hulbert considers a number of hypotheses to explain the September effect and invites readers to share their own ideas.  His view is that a good explanation would make it harder to attribute September's performance to a random statistical fluke, though he notes that day traders do not wait for any hypothesis testing!

Taking data at face value strikes most people as sensible.  Those with expertise in statistics know better.

Hulbert's MarketWatch colleague, Irwin Kellner, offers a number of reasons for September weakness, including the possibility of a self-fulfilling prophecy: investors sell in September for fear of a repeat of historical declines.

What both articles miss

As a veteran debunker of mythical market lore, I have seen this many times.  Traders and investors alike just want to see the data, all of it!  The more data the better.  The mistake comes from starting with the data instead of with a hypothesis.  The inevitability of random results is lost on most, including the “day traders” Hulbert mentions.

Both Hulbert and Kellner suggest that identifying a hypothesis after the fact makes the inference more plausible.  It does not.

They miss the basic reality.  There are twelve months.  There will be a distribution of returns.  Some months will be good and others will be bad.  Always.
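To see how inevitable this is, consider a minimal simulation sketch (with made-up, market-like parameters, not actual return data): every calendar month's returns are drawn from the same distribution, yet one month always ranks dead last, often by a margin that looks meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)

# 110 years x 12 months of returns, ALL drawn from the same distribution.
# The mean (0.7%/month) and volatility (4.5%) are made-up, market-like numbers.
returns = rng.normal(loc=0.007, scale=0.045, size=(110, 12))

avg_by_month = returns.mean(axis=0)   # average return for each calendar month
worst, best = avg_by_month.argmin(), avg_by_month.argmax()

print(f"Worst month: #{worst + 1}, average {avg_by_month[worst]:+.2%}")
print(f"Best month:  #{best + 1}, average {avg_by_month[best]:+.2%}")
# By construction no month is truly different, yet some month always
# finishes last, and the gap between best and worst is often striking.
```

Rerun with a different seed and a different "worst month" emerges.  A pattern that must appear in random data cannot, by itself, be evidence of anything.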

What if there seems to be a reason?  A promising hypothesis does not help when it is formed after you already know the outcome.  Any smart person can invent a compelling reason.  Consider a real-life example, in which a group of very intelligent first-year grad students at a big-time university tried to satisfy their professor:

The professor opened the seminar by introducing a series of findings drawn from the social science literature: relationships such as the voting patterns of black males, the party identification of former military personnel, and the like.

As he introduced each finding, the professor invited the students to comment, suggesting hypotheses to explain the results.  Straining to please, the students offered many imaginative suggestions; their ideas could have filled many journal articles.  They were showing off, and happy to do so.  The professor gave positive feedback on the thoughtful analyses and ticked off a dozen or so propositions.

At the end of the seminar, the students sat back, satisfied with their performance.  The professor congratulated them on their creativity and imagination, and everyone sat up a little taller.

Then the prof dropped the bombshell:

The actual findings were all EXACTLY THE OPPOSITE of what he had stated!

The next day's assignment was to come back with new hypotheses for the actual findings.

The lesson we learned applies to everyone.

The scientific method works only when one begins with a hypothesis, formed before looking at the data.

Read more articles by Jeffrey Miller