Complexity Wins Again
Assumptions and Guesswork
“The Tiniest Inconsistency”
The Bias in Models
AI to the Rescue?
Puerto Rico and Dallas
Here in Puerto Rico we are now an hour ahead of Eastern Time, as we don’t do daylight saving time. I stayed up much later than normal on election night to watch the returns. I knew fairly early, when Florida and North Carolina looked so close, we weren’t going to see a “blue wave.” But beyond that, it was clear the presidency would not be settled that night.
We did learn one thing with high confidence, though. The political polls were seriously, dreadfully wrong. Biden’s comfortable lead both nationally and in swing states bore very little resemblance to actual voting. This follows a similar miss in 2016.
It will take time to nail down the precise error, but we will be told, “Something was different this time.” Well, of course. “Something” is always different. The polling industry’s conceit is that it knows how to collect meaningful samples and then adjust them to resemble the pollsters’ assumed reality. The assumption can be wrong, and often is.
Understand, pollsters are paid to get things right. The true professionals try their best, and most still missed it.
In that regard, the political polling profession resembles the economics profession, which believes its elaborate models can reveal the future. Both fail because they try to measure incredibly complex systems that change in ever-changing ways. Similarly, professional epidemiologists try to create virus models. They’ve done better than political pollsters and economists, but they also must make assumptions.
Today I want to use the election to illustrate just how complex a seemingly simple situation can be. This matters not just politically, but economically as well.
The first challenge is our complex method for electing a president. We don’t have a national election. Each state gets a certain number of Electoral College votes, which go (all or nothing) to the winner of that state’s popular vote (with a couple of exceptions I won’t discuss).
This means the national popular vote, while interesting, has no bearing on who becomes president. Pollsters still measure it, though, because people are curious and it’s relatively straightforward.
Except, it’s not straightforward at all. You can ask people how they will vote, and they may tell you, but it won’t matter unless they actually vote. Pollsters try to control for this with “likely voter” models. Sometimes they assume you will vote this time if you voted last time (which they can confirm from public records). Or they may ask questions to gauge your intent. Regardless, these models are still guesswork.
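To make the guesswork concrete, here is a toy sketch of a “likely voter” adjustment, assuming a made-up five-person sample and invented turnout probabilities. Real likely-voter models are far more elaborate, but the mechanics are the same: each respondent’s stated preference is weighted by the pollster’s guess at whether they will actually vote.

```python
# A toy "likely voter" adjustment. Respondents and turnout probabilities
# below are purely illustrative, not real polling data.

respondents = [
    # (prefers_candidate_A, estimated_turnout_probability)
    (True,  0.9),   # voted last election -> model assumes very likely
    (True,  0.4),   # first-time registrant -> model guesses 40%
    (False, 0.8),
    (False, 0.6),
    (True,  0.7),
]

# Raw, unweighted share of the sample preferring A:
raw_a = sum(1 for prefers_a, _ in respondents if prefers_a) / len(respondents)

# Turnout-weighted share: each preference counts in proportion to the
# pollster's guess that the respondent shows up on election day.
weight_total = sum(p for _, p in respondents)
support_a = sum(p for prefers_a, p in respondents if prefers_a) / weight_total

print(f"Raw sample: {raw_a:.0%} for A")            # 60%
print(f"Turnout-weighted: {support_a:.0%} for A")
```

Change the turnout guesses and the headline number moves, which is exactly the problem: the adjustment is only as good as the assumptions behind those probabilities.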
This year brought even more complexity due to the coronavirus. Will you still vote if you believe it will risk your health and possibly your life? Maybe, maybe not. Not everyone thinks that risk is significant, and these attitudes aren’t equally or predictably distributed.
State officials introduced yet more complexity with new voting methods, expanded mail-in voting, etc. These vary by state, and in some cases changed at the last minute. No one knew, or could know, how this would affect turnout. The fact that we had the largest turnout since 1900 (percentage-wise), with a significant number of first-time voters, skewed the polls even more.
We all now know about Florida. Biden was on average +7 yet lost by more than three percentage points. Susan Collins in Maine is another skewed example. She was down in essentially every poll, anywhere from a few points to 10+. I remember meeting Susan Collins for the first time three years ago on my annual fishing trip in Maine, along with 50 other economists, investment analysts, investment writers, and so on. I’ve met more than a few politicians in my lifetime. I have never met a better retail politician, one-on-one, than Susan Collins. She just gets into your space and you like it. You end up wanting to help her. And that likability, plus her willingness to strategically vote against her own party, will apparently keep her in the Senate six more years.
So even if we did have a single national election, measuring it was going to be tougher than ever this year. But in fact, you have to multiply this complexity 50+ times to account for the state-level silos in which the elections are held.
Then there’s the geographic element. Pollsters try to measure choice and turnout by area, both because it’s important to local races and because the electoral college system forces them to. It doesn’t really matter if a candidate wins a state by 50.1% or 75%. They get the same number of electoral votes. So high turnout in one state can’t offset low turnout somewhere else. You have to estimate it separately everywhere.
But pollsters have a more basic problem. Before any of the above matters, they need people to answer the phone.
As you are no doubt aware, technology has changed the way we communicate. Does your home still have a landline phone? (I haven’t had one for well over a decade.) If so, do you answer it? Particularly in campaign season, when you get so many robocalls? If you have an iPhone, have you activated the “Silence Unknown Callers” feature? Mine now happily warns me about spam risk, likely sales calls, and so forth.
These are serious problems for pollsters. They need to reach a certain number of people, and it’s getting harder. I saw an interesting article in The Sydney Morning Herald this week. It’s by an American political scientist who teaches in Australia.
In the age of the mobile phone, very few people answer calls from unlisted numbers, and even fewer want to talk to a pollster—who, for all they know, may be a fraudster in disguise. The Pew Research Centre reports that its response rates have plummeted from 36 per cent two decades ago to just 6 per cent now. And Pew is a not-for-profit outfit that doggedly attempts to contact every sampled phone number at least seven times. Commercial polling firms don't have that luxury.
No major commercial polling company is brave enough to reveal its response rate. Rumors are that they're down to about 3 per cent. That's a very thin foundation on which to predict a presidential election. The tiniest inconsistency between the characteristics of that 3 per cent and those of the electorate as a whole could invalidate the entire industry.
The pollsters do their heroic best to model the likely behavior of the masses from the self-reports of a few phone-answerers, but all such models are approximations. They inevitably introduce error. Model error may be even bigger than the sampling error that goes into calculating the "error margins" that are often reported alongside polling data. Or it may not be. No one knows but the pollsters, and they're not saying.
He also talks about “social desirability bias,” which is basically a reluctance to reveal your vote choice to a stranger. We heard stories before the election of “shy Trump voters.” I imagine there were also shy Biden voters. It’s understandable in a nation so polarized and bitter that expressing political opinions can cost you friends or even your job. But it makes accurate polling even more difficult.
But zero in on this line: “The tiniest inconsistency between the characteristics of that 3 per cent and those of the electorate as a whole could invalidate the entire industry.” We have now seen in successive elections inconsistencies well beyond tiny. What other industry could survive such failures?
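A quick simulation makes the point. The numbers below are illustrative, not real polling data: suppose the race is truly 50/50, but the small slice of people willing to answer the phone leans just three points toward one candidate. The textbook margin of error, which accounts only for random sampling noise, never sees that bias coming.

```python
# Toy Monte Carlo: a tiny, systematic tilt among phone-answerers
# swamps the reported margin of error. All numbers are hypothetical.
import random

random.seed(42)

TRUE_SUPPORT = 0.50    # actual share of the electorate backing candidate A
RESPONSE_BIAS = 0.03   # responders lean 3 points toward A (assumed)
SAMPLE_SIZE = 1000     # a typical poll sample

def run_poll():
    """Simulate one poll drawn from the biased pool of responders."""
    p = TRUE_SUPPORT + RESPONSE_BIAS
    hits = sum(random.random() < p for _ in range(SAMPLE_SIZE))
    return hits / SAMPLE_SIZE

# The classical 95% margin of error assumes responders mirror the
# electorate; it only measures random sampling noise (~3.1 points here).
moe = 1.96 * (0.25 / SAMPLE_SIZE) ** 0.5

polls = [run_poll() for _ in range(1000)]
avg = sum(polls) / len(polls)
print(f"True support:  {TRUE_SUPPORT:.1%}")
print(f"Average poll:  {avg:.1%}  (reported margin of error ±{moe:.1%})")
```

Every poll in the simulation comes back around 53%, the error bars look respectable, and the headline number is still three points wrong, every single time. Averaging more polls doesn’t help; they all share the same bias.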
I can think of at least one.
If measuring voters is complex, measuring the economy is even more so. Think of all the moving parts just in the US. Millions of companies, hundreds of millions of workers and consumers, buying and selling billions of different goods and services under sharply different conditions in different places, and all of this subject to change at any time.
Just think of jobs data. How many Americans are unemployed? Certainly it is a big number. We who are fortunately still employed all have jobless friends and family. But is it 10 million, 20 million, 50 million? Over 21 million people are still claiming some type of weekly unemployment insurance as we go into the weekend.
The numbers we have come from surveys, not unlike the political surveys and with similar limitations. When a stranger calls you on the phone to inquire about your job status, will you take the call? And if you do, will you answer honestly? And even if you do, exactly what does it mean to be “unemployed” now? Maybe you lost your full-time job but you spent a few hours last week helping someone move. You made $100 and you will count as “employed” but your situation is not remotely like it was when you went to an office every day.
Apply that same level of complexity to all the other economic numbers: trade flows, retail sales, savings rates, manufacturing output, real estate, bank lending, and everything else. Much of it is questionable at best. Yet economists still include it in models which are themselves filled with assumptions about the relationships between various inputs. They show their models to government leaders, CEOs, and central bankers, who then use them to make important decisions that affect you and me.
Is this good? That’s also unclear. I’ve told the story of the World War II weather forecaster (an officer named Kenneth Arrow, who later became one of the most famous Nobel laureates of the last century) who knew his forecasts were error-prone, and worried the generals would rely too much on them. But the generals knew this. They demanded forecasts anyway. Why? Because they needed something, even if it was wrong.
On that point, I have a little sympathy. Every writer knows that “blank page” feeling. Getting started is the hardest part. I may end up deleting that first paragraph I struggled to write, but it was still useful. I suppose these models have similar value to decision-makers.
On the other hand, there are limits. If the weather forecast said partly cloudy and you got a thunderstorm instead, it may ruin your day. Oh well. But it matters a lot more when you expect partly cloudy and you get a Cat-5 hurricane. It will ruin more than your day. Thankfully, our weather forecasting methodology is improving, if not perfect. Modern technology lets us see the hurricane coming and prepare. (Although this year, to be honest, Shane and I went through the entire hurricane preparation ritual, just to have a hurricane veer off at the last minute right before it got to Puerto Rico. We didn’t complain.)
The real problem, with both political polls and economic models, is when users rely too much on them. They give us an (often false) feeling that we know the future, which gives us comfort. We can see the margin-of-error footnote in the polls, we can hear the pollsters’ warnings and caveats, but on some level we want to believe. That is human nature, and it’s hard to avoid. It is especially hard when those models tell us something we want to believe. This is called confirmation bias, and it is some of the heaviest emotional baggage we bring to our investment decisions.
The word “presuppositionalism” typically refers to a particular theology, but I use it in a broader context. We all start out from a beginning point in our thinking. We believe our eyes see the real world. We believe some of what we read and what we hear from friends. These shape our thought patterns and what we presuppose to be true. Without these presuppositions it would be extremely difficult to communicate with other people. (As a matter of personal discipline, I constantly question my own presuppositions. Sometimes I change them.)
Here’s the problem. Every person who creates a model does so with specific presuppositions in his or her head. You can try to get those presuppositions out of your models, but it is very difficult.
The Bias in Models
Let’s first talk about mainstream economic models from large government agencies and major investment companies. I discussed a CBO model a few weeks ago, and have used and abused CBO projections over the years. For starters, their models never predict a recession. But it is not just the CBO. I wrote the following some seven years ago, and nothing has changed since.
In one of the broadest studies of whether economists can predict recessions and financial crises, Prakash Loungani of the International Monetary Fund wrote very starkly, "The record of failure to predict recessions is virtually unblemished." He found this to be true not only for official organizations like the IMF, the World Bank, and government agencies but for private forecasters as well. They're all terrible. Loungani concluded that the "inability to predict recessions is a ubiquitous feature of growth forecasts." Most economists were not even able to recognize recessions once they had already started.
In plain English, economists don't have a clue about the future.
Take the record of Wall Street strategists. The consensus average of blue-chip economists always predicts a positive year for the S&P 500. Federal Reserve economists are something like 0 for 300 on their predictions about the direction of the economy, and only slightly better on interest rates. The record has improved somewhat now that they plan not to raise rates for anything, even in the face of inflation. That makes predictions a great deal easier.
This problem with models and predictions may be personal, too. Your retirement likely depends on some kind of model. It tells your financial planner how much you can withdraw from your savings without running out too soon. But that advice depends on questionable presuppositions like “stocks always go up over time.”
My friend Ed Easterling at Crestmont Research notes there have been numerous 20-year periods where stock market returns were below zero, especially when taking into account inflation. Ed’s website has one of the best data treasure troves anywhere:
A number of advocates and studies provide for 5% withdrawal rates: “I only want $50,000 from my million dollars” and have it last for 30 years. The calculated success rate for that rate of withdrawal is 73%. Pretty good odds…except when we consider the impact of valuation.
“SWR” stands for Safe Withdrawal Rate, and the safe amount varies considerably depending on market valuations when you start. The table shows that if you begin in the top 25% of valuations and each year withdraw 5% of your $1 million retirement savings (to generate $50,000 for living expenses), you run out of money 53% of the time. On average, you get less than 21 years of retirement before the money is gone. Good luck if you are still alive and kicking.
With valuations in the top 10%, like they are today? It is even worse.
If your financial planner says you can take out 5% per year “safely” based on a 60/40 (stocks to bonds) portfolio, then you should take your papers and walk out the door.
Furthermore, many planners use a total return model which starts in the 1920s and shows that over time markets will give you an 8 to 9% return. They simply (and lazily) plug in that 8–9% number for each and every future year, assuming that time will take away the effects of a bear market and recession. And that is probably true if you have 80 to 90 years. If, however, you are retiring when the markets are at a very high valuation, like now, your model will likely give you really bad advice.
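A toy comparison shows why the flat-return shortcut fails. The scenario below is illustrative, not a real market forecast: both paths deliver the same 8.5% arithmetic-average annual return over 30 years, but in one of them a bear market arrives in the first three years, right when the retiree starts withdrawing.

```python
# Toy sequence-of-returns comparison. The return paths are hypothetical;
# both average 8.5% per year, but the order of returns differs.

START_BALANCE = 1_000_000
WITHDRAWAL = 50_000    # the 5% "safe" withdrawal from the text
YEARS = 30

def project(returns):
    """Withdraw first, then apply that year's return.
    Returns the ending balance, or 0.0 if the money runs out."""
    balance = START_BALANCE
    for r in returns:
        balance = (balance - WITHDRAWAL) * (1 + r)
        if balance <= 0:
            return 0.0
    return balance

# Path 1: the planner's model, a smooth 8.5% every single year.
flat = project([0.085] * YEARS)

# Path 2: same 8.5% arithmetic average, but three -20% years come first
# (3 * -0.20 + 27 * 0.11667 averages to 8.5% over 30 years).
rough = project([-0.20] * 3 + [0.11667] * 27)

print(f"Flat 8.5% model:   ${flat:,.0f}")
print(f"Bear market first: ${rough:,.0f}")
```

The smooth path ends with more money than it started with; the bear-market-first path goes broke roughly two decades in, echoing Ed’s “less than 21 years” figure, even though the average return is identical. Averages hide the sequence, and retirees live the sequence.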
Pension funds are going to get devastated in this decade. So are many retirees. And it all comes from bad models on top of more bad models. It’s a big problem. But maybe technology has a solution.
AI to the Rescue?
Last week I mentioned I had been thinking a lot about the artificial intelligence field. This election gave me even more food for thought. The latest AI systems, paired with powerful supercomputers (and soon quantum computers) can process massive data sets that are incomprehensible to humans. Complexity doesn’t scare them. They can dig in and make sense of it.
Now, combine that thought with our political polling and economic modelling challenges. Could AI be the answer? Can the machines process these complex data sets well enough to make them not just a starting point, but useful and accurate? AI will contribute to radical changes in the way we make important decisions. This may be the biggest technology trend of our lifetimes.
I’m happy to announce we have a new Mauldin Economics video on the things every investor should know about AI: how it works, what the main subsectors are, its huge economic impact, and the profit opportunities it offers. We’ll unveil it online on November 9 at 2:00 pm EST. It’s free if you register in advance here. Registrants will be able to watch a replay later, too. I hope you’ll join me in learning more about AI.
Puerto Rico and Dallas
This has been a most unusual year for me, with less travel than I can remember for the last 40 years. I will, however, go to Dallas for a “small” Thanksgiving dinner with my kids and family, not the usual invitation for several dozen extra guests. I will get there early enough to go grocery shopping and cook, and hopefully meet some friends. There might be one other trip in my future before the end of the year, but I’m doing my best to get them to come to me.
As a side note, there is so much opportunity here in Puerto Rico it is hard to get your head around it. And we have a new government which has replaced a lot of the cronyism and corruption. It’s kind of like the beginning of baseball season. Everyone has high hopes.
And with that I will hit the send button. I wish you a great week as we all hope that we can finally know how the elections turned out, up and down the ballots. The surprise to me was how many House seats the Republicans were able to take. I was expecting, as were many, that the party would lose seats. Many are still in play. Certainly not enough to take control, but I find it interesting in an inside baseball kind of way.
Your as-skeptical-as-ever-of-models analyst,
© Mauldin Economics