By now you are likely aware that Nate Silver of the New York Times correctly predicted the results for all 50 states (plus DC) in this year’s presidential election and all but two Senate races. Silver’s predictive capabilities across a range of disciplines have made him a near-deity among those whose livelihood depends on accurate forecasting – from poker players to counter-terrorism units. It’s clear why: His methods work – at least in some cases. And their strengths and limitations carry important lessons for financial advisors.
Already a lightning rod for controversy throughout this year’s election cycle, Silver has been making the rounds to publicly examine his methods – and to raise questions about them going forward. His recent high profile owes in part to his election-day successes, but also to the corresponding increase in sales of his book, “The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t.”
Published September 27th, the book surveys a number of recent failures and successes in forecasting across an array of fields: politics, the economy, baseball, and the weather, to name a few. Exploring statistical theory and its many applications in plain English, the book effectively renders its complex subject matter accessible to a broad audience.
The book’s core message is inclusive as well. Rather than simply presenting a new way of thinking, Silver implores his readers to think for themselves: challenge their assumptions, recognize opposing viewpoints, and reevaluate their beliefs as circumstances change.
Let’s look first at Silver’s overarching approach to data analysis and statistical prediction. We’ll then turn to how his methods apply to a decision that financial advisors face on a regular basis: selecting an actively managed mutual fund that is likely to outperform its relevant benchmark.
Nate Silver: Lord of the Algorithm
Neatly encapsulating the accolades that Silver has received, Jon Stewart bestowed upon him the title of “Lord of the Algorithm” when Silver appeared on Stewart’s show the night after the election. But Silver’s book contains more philosophy than mathematics. His thinking is grounded in Bayesian statistics, which emphasize the importance of prior beliefs when weighing evidence for or against a hypothesis. He primarily blames predictive failings on our culture’s emphasis on certainty and widespread unwillingness to be honest about our biases.
Though he has gained mainstream recognition as a political prognosticator, Silver’s philosophy first took root in very different soil: the study of baseball. He first glimpsed the power of statistical analysis while developing PECOTA, an algorithmic system for predicting player performance. By recognizing his own dismissive attitude towards scouts and their subjective evaluations of players’ skills, Silver was able to create a more accurate statistical model that accounted for such personal analysis.
Silver’s political prediction blog, fivethirtyeight.com, was born from the ashes of his fascination with poker, which ended abruptly in 2006 when the 109th Congress enacted legislation that crippled the then-burgeoning online poker industry. Feeling lucky to have gotten out of the game when he did, Silver examines his failings as a gambler in the book and admits to having been ignorant of them while he was still playing. Self-evaluation, which Silver emphasizes as necessary to prevent overconfidence, is central to his predictive philosophy.
Four principles
In spite of his “Lord of the Algorithm” title, Silver would be the last to put the algorithm – his or any other – on a pedestal. In fact, he spends many passages in the book cautioning would-be predictors, and consumers of predictions, against taking the output of mere algorithms on faith.
Silver’s message can be boiled down to four main principles:
- Don’t predict based on data that is unsupported by theory or real-world insights.
Silver complains that in these days of Big Data, a common refrain is: “Who needs theory when you have so much information?”
He quotes one forecaster, climate-change skeptic Scott Armstrong, who says, “I actually try not to learn a lot about climate change. I am the forecasting guy” – as if forecasting could be done properly with data alone, without reference to a theory. Silver also cites an economic forecasting firm that brags about how it looks at 400 economic variables. But Silver objects that, “If you just look at the economy as a series of variables and equations without any underlying structure, you are almost certain to mistake noise for a signal and may delude yourself.”
Instead, Silver believes the purpose of algorithmic modeling is “modeling for insight” – he’s not interested in the results of the model for their own sake, but rather because they provide insights that can then be used, with a heavy overlay of common sense, intuition, and non-quantitative analysis, to make predictions.
- Don’t believe predictors who are overconfident.
Silver invokes Wharton professor Philip Tetlock’s distinction between foxes and hedgehogs. (Tetlock’s thesis was discussed previously in Advisor Perspectives.) A hedgehog, in Tetlock’s analogy, knows “one big thing,” while a fox knows “many little things.” Hedgehogs tend to be overconfident and make bold predictions – their one big idea biases their forecasts. Foxes are much less confident in their predictions, but Tetlock’s studies show they are more often right.
Unfortunately, there are many incentives to be a hedgehog and to make overconfident, bold predictions. “In a society accustomed to overconfident forecasters who mistake the confidence they express in a forecast for its veracity,” says Silver, “expressions of uncertainty are not seen as a winning strategy.” Furthermore, he says, “Even if you know that the forecast is dodgy, it might be rational for you to go after the big score.” In other words, there is often more upside in confidently making a bold prediction that turns out to be right than there is downside in having been wrong.
- Don’t neglect possible scenarios that are out-of-sample or difficult to categorize.
Silver argues that “when we have trouble categorizing something, we’ll often overlook or misjudge it.” Out-of-sample events – those that haven’t occurred in the data set we have at hand – are difficult to categorize. He believes that the 9/11 terrorist plot was not predicted, even though there were strong signals beforehand, because it “was less a hypothesis that we evaluated and rejected as being unlikely – and more one that we had failed to really consider in the first place.” The failure to predict a national drop in housing prices and its consequences prior to the 2008 financial crisis was similarly an example of ignoring out-of-sample events.
- The Bayesian method of admitting biases, then correcting them through observation, is much better than the current statistical orthodoxy, which emphasizes a rigid posture of objectivity.
Silver doesn’t argue that the Bayesian method is superior algorithmically, just that it is more likely to nourish the proper predictive mind-set. A recent paper has argued that the “perfectly objective” method of hypothesis testing that is widely accepted today forces researchers into ethical dilemmas. Suppose, for example, that under the orthodox approach you decide in advance to test your hypothesis on 20 subjects. You fall just short of the required significance level, but you can sense that there is something going on in the data. The orthodox view says you are not allowed to extend the experiment – you must report that you found no evidence for your hypothesis, because doing anything else would be considered a violation of objectivity. Silver thinks your predictions will be better if you’re not boxed in by such contorted rules meant to preserve objectivity.
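To make the contrast concrete, here is a minimal sketch in Python – our own illustration, not an example from Silver’s book. The trial data, the 50% benchmark, and the uniform Beta(1, 1) prior are all hypothetical assumptions. The frequentist test is evaluated exactly once at the pre-registered sample size, while the Bayesian analysis updates a prior belief observation by observation and remains meaningful at any stopping point.

```python
# A minimal sketch (hypothetical data, not from Silver's book) contrasting
# a fixed-sample frequentist test with Bayesian updating for the question:
# does a treatment succeed more than half the time?
from scipy import stats

# Hypothetical results for 20 subjects (1 = success, 0 = failure).
outcomes = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1]
k, n = sum(outcomes), len(outcomes)  # 14 successes out of 20

# Frequentist: a one-sided test of the null p = 0.5, evaluated exactly once
# at the pre-registered sample size.  P(X >= 14 | n = 20, p = 0.5):
p_value = stats.binom.sf(k - 1, n, 0.5)
print(f"{k}/{n} successes, one-sided p = {p_value:.3f}")  # ~0.058, just misses 0.05

# Bayesian: start from a uniform Beta(1, 1) prior on the success rate and
# apply the conjugate Beta-Binomial update; the posterior can be consulted
# no matter when data collection stops.
a, b = 1 + k, 1 + (n - k)
print(f"P(success rate > 0.5 | data) = {stats.beta.sf(0.5, a, b):.3f}")
```

The Bayesian reading doesn’t rescue a weak result; it simply states, given the assumed prior, how probable the hypothesis now looks – the kind of honest expression of uncertainty Silver advocates.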
Information overload
Silver’s book begins with a brief history of mankind’s quest to share information, from the printing press to the dawn of the Internet, to the current moment, which he calls “The Era of Big Data.” He warns of the dangers inherent “whenever information growth outpaces our understanding of how to process it.” The increase in data has led to a corresponding increase in analysis, both good and bad, that, thanks to the Internet, spreads quickly regardless of its quality.
Silver explains how human evolution has left us with a biological instinct to over-interpret data: we too hastily detect patterns even when there are none and, “unless we work actively to become aware of the biases we introduce, the returns to additional information may be minimal — or diminishing.”
Failing to learn
Silver reserves his harshest criticism for political and economic forecasters, who he believes exploit a “gap between how well these forecasts actually do and how well they are perceived to do.” He is frustrated by how often forecasts in these fields are presented without margins of error or supporting statistics. This, Silver says, fosters a tendency to “tell a story about the world as we would like it to be, not how it really is.”
Examining the recent financial crisis, Silver presents evidence that people at all levels, especially the rating agencies, showed “an extraordinary capacity to ignore risks that threaten their livelihood, as though this will make them go away.” Overall, in both politics and economics, he sees far too many incentives, both financial and otherwise, for commentators to be confident rather than correct.
Learning to fail
Throughout the book, Silver pleads the case for thinking and theories that constantly evolve, quoting Keynes’ famous statement: “When the facts change, I change my mind.” Silver challenges readers to accept that they are not as smart as they think they are and that simply consuming more raw data will not change that. He writes:
This book is less about what we know than about the difference between what we know and what we think we know. And it recommends a strategy for narrowing that gap, one which requires one giant leap and then some small steps forward. The leap is into the Bayesian way of thinking about prediction and probability.
The Bayesian way of thinking isn’t new – in fact, it dates back hundreds of years – but it has been under attack in the last century, and it has only recently experienced a resurgence.
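For readers contemplating that leap, the arithmetic behind Bayes’ theorem is compact. The sketch below uses our own hypothetical numbers – a generic screening test, not an example drawn from the book – to show how a prior belief in a hypothesis H is revised by evidence E:

```python
# A minimal worked example of Bayes' theorem (illustrative numbers only).
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from the prior P(H) and the two likelihoods."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Hypothetical screening test: 1% of patients have the condition; the test
# catches 90% of true cases but also flags 9% of healthy patients.
posterior = bayes_update(prior=0.01, p_e_given_h=0.90, p_e_given_not_h=0.09)
print(f"P(condition | positive test) = {posterior:.2f}")  # ~0.09
```

Even a seemingly accurate test leaves the posterior at roughly 9%, because the prior – the rarity of the condition – carries real weight. That weighting of prior belief is exactly what the frequentist orthodoxy, discussed next, declines to formalize.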
The Bayes legacy
Silver’s critics often find fault with his use of Bayesian statistics, and many are proponents of the competing frequentist theory. Popularized by Ronald Fisher in the 1920s, the frequentist method pits a null hypothesis – the generally accepted default position – against an alternative, then collects a sample large enough to test whether the data are extreme enough to reject the null. This approach strives to divorce a study from the biases of the person conducting it.
Silver counters that, by “striving for immaculate statistical procedures,” researchers inadvertently find themselves “hermetically sealed off from the real world.” Using Bob Voulgaris, the sports bettor, as an example, he shows how a forecaster can use biases found in their previous forecasts to improve the accuracy of future forecasts.
Frequentists continue to argue that personal bias has no place in objective statistical analysis and that, with enough data, their approach yields truthful results. Bayesians, on the other hand, believe the truth lies somewhere between our converging biases, and that recognizing those biases is necessary in order to find it.
Fisher’s theory continues to be a widely accepted standard, although Silver points out that some statisticians have begun to argue that frequentist statistics should no longer be taught to undergraduates. He accepts that it will take some time for this change in theories to occur, but he maintains that the Bayesian approach will prevail, in part because the Bayesian view holds that eventually we’ll converge upon the better approach.
Or, as Silver puts it, “Bayes’s theorem predicts that the Bayesians will win.”
Think probabilistically
Woven between the graphs and formulas of Silver’s book is the overriding imperative that we all need to “think probabilistically” and take the time to better understand the statistics that are presented to us every day. He urges us to keep an open mind and shed our tendency “to think we are better at prediction than we really are.”
Silver’s advocacy of the Bayesian method can certainly be challenged. After all, it assigns weight to pre-existing biases; even Silver mentions that Billy Beane, the hero of Moneyball, who used statistics to find winning baseball players, “avoids what he calls ‘gut-feel’ decisions. If he relies too heavily on his first impressions, he’ll let potentially valuable prospects slip through the cracks.” Isn’t that an argument for keeping subjective estimates out of statistical reasoning? Perhaps not, if those estimates are scrutinized and updated intelligently.
But everyone – including Silver – has biases that can ultimately get in the way of the correct conclusion, whether your philosophy is Bayesian or frequentist. For example, Silver says that frequentist methods “neither require nor encourage us to think about the plausibility of our hypothesis: the idea that cigarettes cause lung cancer competes on a level playing field with the idea that toads predict earthquakes.” But if we didn’t already know that years and years of statistical studies had provided overwhelming evidence that cigarettes cause cancer, why would we so cavalierly jettison the notion that toads predict earthquakes while keeping the proposition that cigarettes cause cancer? It has been reasonably conjectured from anecdotal evidence that toads, moles, and other creatures that live close to the earth can predict earthquakes because they sense the escaping gases that precede them. And before medical studies investigated the correlation between cigarettes and cancer, it was possible to suspect that the link might be spurious – think of all the other unhealthy influences that might correlate with smoking habits. There is no real way to know a priori which hypothesis is the more plausible.
Here’s another beef with Silver: He says that “members of Congress, who often gain access to inside information about a company while they are lobbied and who also have some ability to influence the fate of companies through legislation, return a profit on their investments that beats market averages by 5 to 10 percent per year.” But this is one case where Silver has not questioned his information adequately, as an article in this publication has shown.
Could Silver find the next great mutual fund?
Chapter 11, in which Silver focuses his expert attention on whether you can pick a winning mutual fund, may be of particular interest to advisors – although (spoiler alert!) Silver obtains the same result that almost every other researcher has over several decades. He looked at mutual funds’ performance from 2002 through 2006 and compared it with performance from 2007 through 2011, finding that “there was literally no correlation between them… there was just no consistency in how well a fund did, even over five-year increments.” His conclusion is that “you’re best off just selecting the one with the cheapest fees — or eschewing them entirely and investing in the market yourself.” If more evidence were needed, this provides further confirmation – from no less an authority than the “Lord of the Algorithm”! – that statistical methods are unlikely to work for finding a good active manager.
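Silver’s persistence test is easy to replicate in spirit. The sketch below is our own illustration using simulated data – not Silver’s actual dataset – in which fund returns are pure noise around a common market return, the no-skill null. Under that assumption, the rank correlation between the two five-year windows hovers near zero, just as Silver found in the real data:

```python
# A minimal sketch (simulated data, not Silver's dataset) of the persistence
# test: rank funds by return in one five-year window, then check whether
# those ranks predict the next window.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_funds = 500

# Hypothetical annualized returns: pure noise around the market's return.
returns_2002_2006 = 0.07 + rng.normal(0.0, 0.04, n_funds)
returns_2007_2011 = 0.02 + rng.normal(0.0, 0.04, n_funds)

rho, p = stats.spearmanr(returns_2002_2006, returns_2007_2011)
print(f"Rank correlation between periods: {rho:.3f} (p = {p:.2f})")
# With no persistent skill in the simulation, rho lands near zero.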
Climate change
In a particularly effective chapter, Silver applies the same enlightened statistician’s approach that he uses for other topics to the subject of climate change. He evaluates whether predictions made 25 years ago of a warming trend were accurate, and finds that – in spite of criticisms that have been leveled against them – they hold up reasonably well.
In fact, the evidence that global warming is occurring is much like the evidence that equity investments rise in value in the long run. In both cases, because of noise in the data, short-term windows can show either an increase or a decrease; but, over time, the long-term trend becomes clear. Furthermore, in both cases there is cogent theory underpinning the expected rise. Although Silver believes it is important to examine the data to test theories that could be harebrained, he also believes – conversely – that you can’t read facts from data unless you have a sensible explanation for those facts.
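A quick simulation (with hypothetical parameters of our own choosing) illustrates why short windows mislead in both settings: when a small annual drift is buried in much larger annual noise, a sizable fraction of five-year windows point downward even though the long-run trend is up.

```python
# A minimal sketch (hypothetical numbers) of signal versus noise: a series
# with a small upward drift swamped by year-to-year volatility.
import numpy as np

rng = np.random.default_rng(1)
years = 100
drift, noise = 0.02, 0.15  # annual trend dwarfed by annual noise
cumulative = (drift + rng.normal(0.0, noise, years)).cumsum()

# Fraction of 5-year windows that decline despite the upward drift:
down_windows = np.mean(cumulative[5:] < cumulative[:-5])
print(f"5-year windows showing a decline: {down_windows:.0%}")
print(f"Total change over {years} years: {cumulative[-1]:+.2f}")
```

In a typical run, roughly a third of the five-year windows move the “wrong” way – which is exactly why cherry-picking a short window can appear to refute a trend that the full record supports.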
You do not have to accept Bayesian statistics in order to appreciate Silver’s book. In fact, Silver encourages you to read with “healthy skepticism.” But after his election-night success, it’s a safe bet that Nate Silver and his algorithms will remain in the national discourse for some time – and it’s worth learning more about what they can and can’t tell us.
Ben Huebscher is a Los Angeles-based freelance writer and the son of Advisor Perspectives’ founder and CEO, Robert Huebscher.
Michael Edesess is an accomplished mathematician and economist with experience in the investment, energy, environment and sustainable development fields. He is a senior research fellow with the Centre for Systems Informatics Engineering at City University of Hong Kong and a project consultant at the Fung Global Institute, as well as a partner and chief investment officer of Denver-based Fair Advisors. In 2007, he authored a book about the investment services industry titled The Big Investment Lie, published by Berrett-Koehler.