Watching experts predict the future is like watching professional wrestling. You assume everybody knows it’s a put-up job but can’t resist it anyway. Then you discover that most people don’t even know it’s a put-up job in the first place.
Dan Gardner’s book Future Babble takes the sport of expert prediction apart piece by piece, showing why it’s phony, why people still pay close attention to it and why people (including the experts themselves) continue to believe in it. Along the way, the author — who is a real “fox,” as I’ll explain later — fills his pages with enough interesting information and anecdotes to keep us reading with pleasure.
The basic data
Gardner’s book is anchored in the academic work of Philip Tetlock, a professor of management and psychology at Wharton. Beginning in 1984 and continuing for at least 20 years, Tetlock performed an exhaustive long-term controlled study of expert predictions. He collected 27,450 well-defined expert predictions, along with the experts’ subjective estimates of the probability that each would come true. According to Gardner, Tetlock’s experts were very diverse, drawn from different fields, political leanings, institutional affiliations and backgrounds, and they were asked “clear questions whose answers can later be shown to be indisputably true or false.”
Then Tetlock assessed whether the experts’ predictions performed any better than a dart-throwing chimpanzee. The result — perhaps not surprisingly — was that for the most part, they did not. But some experts’ predictions were better than others.
The measurement system
Tetlock used an elaborate measurement system to determine how good the predictions were. Here is a simple example:
Everybody knows — or assumes — that a tossed coin has a 50% chance of coming up heads and a 50% chance of coming up tails.
It seems random, but the result of a coin toss is completely determined by forces that obey the laws of physics: the strength and placement of the initial impulse that sent the coin skyward; the air currents in the space through which the coin travels; the coin’s contours and distribution of mass; the shape, hardness and frictional qualities of the surface on which it lands; and the interactions of all of these factors.
For most of us, these details and interactions are too complex to analyze. We can only shrug and say the probability is 50-50 that the coin will come up heads or tails.
But suppose we called in an expert — a renowned theoretical physicist with a Ph.D. and several major prizes and marks of distinction. Let’s say this expert really believes he can use his skills to better estimate the probability of the result of a coin-toss.
Suppose that on each toss of a coin, the expert picks heads or tails and says there is a 60% chance the coin will come up as he predicts. If the expert is a perfect forecaster, then 60% of the time it will indeed come up as he predicts. The expert will get a perfect score of zero on what Tetlock calls calibration. This is because the difference will be zero between the average of his announced subjective probabilities, 60%, and the objectively observed frequency of correct calls, also 60%.
Now suppose instead that this expert is all wet — he can’t predict any better than chance. In that case, no matter which side he picks and what probability he assigns to it, the coin will come up as he predicted only 50% of the time. His calibration score is now 0.10, because the difference between the average of his subjective probabilities, 60%, and the objectively observed frequency, 50%, is 10 percentage points (that is, 0.10).
Under the latter circumstance, an expert who always makes the same predictions as a layman — 50% chance of heads, 50% chance of tails — will do much better. Such an expert, having a very modest sense of his predicting abilities, will get a perfect calibration score of zero, because his subjective probabilities will equal the objective probabilities.
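To make the arithmetic concrete, here is a minimal sketch in Python of the simplified calibration measure used in this example: the gap between the average announced probability and the observed frequency of correct calls. This is an illustration only, with simulated tosses; Tetlock’s actual scoring system is more elaborate than this single number.

```python
import random

def calibration_gap(announced_prob, n_tosses, true_hit_rate, seed=0):
    """Simplified calibration: |average announced probability - observed hit rate|.

    announced_prob -- probability the expert attaches to each call (e.g., 0.60)
    true_hit_rate  -- chance each call actually comes true
                      (0.60 for a perfect forecaster, 0.50 for pure chance)
    """
    rng = random.Random(seed)
    # Simulate n_tosses calls; each comes true with probability true_hit_rate.
    hits = sum(rng.random() < true_hit_rate for _ in range(n_tosses))
    return abs(announced_prob - hits / n_tosses)

# Overconfident expert: announces 60% on every call, but only chance (50%) obliges.
print(calibration_gap(0.60, 100_000, 0.50))  # ~0.10 -- poorly calibrated

# Modest forecaster: announces 50%, and about 50% of calls come true.
print(calibration_gap(0.50, 100_000, 0.50))  # ~0.00 -- nearly perfect calibration
```

With enough tosses, the overconfident expert’s gap settles near 0.10 and the modest forecaster’s near zero, matching the scores described above.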
This shows why some experts — those who are skeptical of their own, or anyone else’s, predicting abilities — tend to do better than those who think they really know. The skeptical ones also tend to have a broader range of knowledge and experience than the latter group, who have great depth in a narrow area. This is why Tetlock classes the two types of experts as “foxes” and “hedgehogs,” after Isaiah Berlin’s essay “The Hedgehog and the Fox,” which turns on the famous line, “The fox knows many things, but the hedgehog knows one big thing.”
Most of the experts who get a lot of predictions wrong are hedgehogs. They know — or think they know — one big thing. Gardner singles out several of these notable experts, especially ecologist Paul Ehrlich, who mistakenly predicted in his 1968 best-seller The Population Bomb that widespread global famine would kill hundreds of millions of people within 10 years. Hedgehogs also tend to have outsized confidence in their abilities to predict.
Successes on the front page, failures in the back
No one escapes unscathed from Gardner’s review. Peter Schiff, for example, who has been lionized for having insistently predicted the financial crisis, is shown to have made many other forecasts that were dead wrong — including predictions of crises that didn’t happen. Ehrlich’s nemesis, Julian Simon — the business professor who bet Ehrlich and two of his colleagues, John Harte and John Holdren, that the prices of a list of scarce resources wouldn’t rise, and overwhelmingly won — also takes his hits.
Surprisingly, Jim Cramer isn’t mentioned, though he certainly could have been. His losing joust with Jon Stewart of The Daily Show over his stock forecasts is a classic of prediction deflation. Indeed, an exhaustive study of Cramer’s track record has shown that his stock picks do not generate alpha.
Why, after all, should we expect anyone to be able to predict stock prices? Lacking other information — as with a coin-toss — the chances of a stock outperforming its benchmark are 50-50. Whether it does so or not isn’t merely a matter of the interaction of forces obeying rigid physical laws; it’s the result of an even more complex interaction of economic and human forces that do not even approximately obey laws. Yet while we would be hard pressed to find a theoretical physicist claiming to be able to predict the result of a coin-toss, we can readily find thousands of people who claim to know whether a stock or stock portfolio will outperform its benchmark.
Why are there so many “expert” predictions? And why do so many people believe them? One reason, Gardner points out, is that predictions get attention. If they happen to succeed, they win attention and plaudits — even celebritization if the prediction is wildly successful. And there’s not much downside. If a recognized expert succeeds, he or she can make page one. If the expert fails, the story may make the back page.
John Kenneth Galbraith, quoted by Gardner, said that pundits “forecast not because they know, but because they are asked.” The public wants predictions. It’s a spectator sport.
Why predictions are so facile
It seems no book is complete these days without references to gee-whiz behavioral experiments exposing the irrationality of human beliefs and preferences. Future Babble is no exception. Gardner cites interesting psychological experiments to show that even expert predictions are frequently little more than numerology. Numerical predictions, for example, are often found to relate to the last number that was mentioned to the predictor, the effect psychologists call anchoring. If a large number is mentioned, and then the predictor is asked to forecast the numerical value of some unrelated measure, he or she will predict a large number; if the first number mentioned was a small one, he or she will predict a small number.
Experiments like this suggest that when people make numerical predictions, they truly pick the numbers out of thin air. What’s more, the experiments show that the predictors actually believe the predictions, believe they have made them carefully and keep believing so after the predictions are proved wrong.
Gardner spends a considerable portion of the book on the fact that even when experts are confronted with evidence that their predictions were off base, they don’t really believe it. They find ways to convince themselves and their acolytes that their predictions were right and furthermore, that their more recent predictions are right in spite of their poor records. Gardner’s chief explanation is an extreme human discomfort with what psychologist Leon Festinger called “cognitive dissonance.” When the facts do not line up with people’s preconceptions, people often change their perception of the facts rather than their preconceptions.
This section of the book is based in good part on interviews with the experts whose predictions fared so badly, like Paul Ehrlich. Sure enough, these experts forget their forecasts that were starkly wrong and remember other forecasts as having been at least partially right.
Acceptance is our strongest weapon
Future Babble treats expert predictions with the attitude that “thus it shall ever be.” The media needs forecasters to help make the news, the public needs predictors to give them hope or an anchor for the future, and experts need to believe they can predict to validate that they are experts. The fact that the whole thing is mostly a sham is almost irrelevant. So why don’t we just accept it?
Maybe if we accept it as part of the human condition, instead of occasionally throwing up our hands in disbelief that people could be so stupid as to believe these “experts,” we would wind up de-emphasizing the issue and — paradoxically — creating an atmosphere in which people are less adoring of expert predictors.
I remember a time when everyone I knew believed that surely we could put an end to war. War made no sense and was bad for everybody — if we just put our heads together, we could end it. Then, perhaps as part of the general shift toward “realism,” or because of the realization that peoples’ interests can be fundamentally at odds, my peers began to believe that international war was simply inevitable.
Well, what do you know? Soon after they started believing that, brutal international wars began to fall out of fashion. It’s not well recognized, but since 1975 the number of lives lost in international combat has plummeted. For 60 years before that — and probably for most of human history — mortal cross-border conflict was the norm.
Silly predictions and international wars are not comparable. But maybe the best response to solemn predictors — whether they are a bit puckish like Jim Cramer or intensely serious like Paul Ehrlich — is to watch, enjoy the fun and laugh them off.
Michael Edesess is an accomplished mathematician and economist with experience in the investment, energy, environment and sustainable development fields. He is a Visiting Fellow at the Hong Kong Advanced Institute for Cross-Disciplinary Studies as well as a partner and chief investment officer of Denver-based Fair Advisors. In 2007, he authored a book about the investment services industry titled The Big Investment Lie, published by Berrett-Koehler.