An allegation has been floated recently that inflation has been exacerbated by corporate greed. A neologism has even been coined for it, “greedflation.” The claim has been backed up by anecdotal and empirical data, and it has been rebutted by anecdotal and empirical data. I will not try to answer the question of whether this allegation is true, but rather the question of how its truth should be determined.
The “greedflation” claim is that corporations are using inflation as an excuse to raise their prices to jack up their profits, exceeding even the price-level rise that otherwise would have occurred. In so doing they make inflation worse. Advocates of this view buttress their argument by pointing to heightened corporate profits and “sky-high” gasoline prices. They say this is a result of increasing concentration of corporate power. Corporations in major industry sectors, they say, have too much monopoly power, enabling them to manipulate prices.
The data-based evidence for and against the greedflation hypothesis
The anecdotal evidence for the greedflation hypothesis is that many corporations have racked up record profits of late. But in rebuttal, there are many other corporations that have not profited so well.
More broadly, the greedflation advocates point to the fact that corporate profits in aggregate have surged recently. But a graph of corporate profits as a share of GDP shows that profits were just as high in 2012, when inflation was very low; thus, corporations achieved high profits at that time without the inflation excuse.
President Joe Biden recently used Exxon Mobil Corp as a whipping boy for the accusation that oil companies are exploiting high gasoline prices to increase their profits. Indeed, Exxon’s profits have recently surged. But the same dataset shows that Exxon’s profits were much higher 10 years ago when gas prices were lower than they are now; exploitation of price inflation can’t explain why Exxon’s profits were 60% higher then.
In other words, as is typical with hypotheses motivated by political goals, academic advancement, or the pursuit of profit, either side can muster data to back up their claims, rendering both sides “evidence-based.”
The problem with “evidence-based”
Why are debates of this sort, with no possibility of resolution, so common in finance and economics? If all the evidence backed one claim and the counterclaim were its exact opposite, wouldn’t the claim have won out against the counterclaim? But this kind of indeterminacy is rampant even when the statistical evidence is strong – when it exhibits high t-statistics, for example; witness the differing answers to the question of whether “sustainable” investments outperform or underperform the market.
The root of the problem is that we have elevated naked empiricism above theory.
The origin of this trend, which has completely and detrimentally overtaken the field of economics, lies in Milton Friedman’s essay “The Methodology of Positive Economics,” published in 1953.1 Friedman’s argument can be paraphrased as follows: “a scientific theory (hypothesis or formula) cannot be tested by testing the realism of its assumptions. All that matters is the accuracy of a theory’s predictions, not whether or not its assumptions are true.”
In other words, a theory’s assumptions don’t matter; all that matters is whether it proves valid in a test – that is, when validated empirically. This led economists on a wild goose chase to immediately “test” all hypotheses and predictions by crunching data, without sufficient attention to the well-known problems of data analysis such as: randomness; data errors; the changing nature of data so that analysis of past data is an unreliable indicator of the future; the outsized influence of outliers; and the myriad complicating factors inevitably ignored by any data analysis.
A few years ago, I had a delightful two-hour lunch, arranged by my friend George Peacock who was then president of the Georgetown alumni association, with the economics Nobel Prize winner George Akerlof after I had written a review of his enjoyable book, coauthored with Robert Shiller, “Phishing for Phools.”
During our conversation I mentioned that I was becoming skeptical of “evidence-based.” Akerlof immediately shot back, “I’m getting skeptical about … data.” I took that as agreement that the economics profession had become too obsessed with the analysis of data.
The apotheosis of that obsession – that inflexible fixation – lies in a crucial insight into one of the central contributors to the financial crisis of 2007-2009, an insight I have cited before.
In an April 27, 2008, article in The New York Times Magazine, journalist Roger Lowenstein illustrated this fixation, and its absurdity and the damage it could cause, very clearly:
Moody’s did not have access to the individual loan files, much less did it communicate with the borrowers or try to verify the information they provided in their loan applications. “We aren’t loan officers,” Claire Robinson, a 20-year veteran who is in charge of asset-backed finance for Moody’s, told me. “Our expertise is as statisticians on an aggregate basis. We want to know, of 1,000 individuals, based on historical performance, what percent will pay their loans?”
Anybody with an ounce of common sense would recognize that the historical performance of loan recipients based on the data was not a good number to use for the probability that subprime mortgage recipients in 2005 and 2006 would repay their loans in the future. The quality of those loans had declined precipitously due to the on-selling of loans by mortgage brokers and others to packagers who repackaged those loans into collateralized debt obligations, or CDOs. The officials at Moody’s probably knew that, but their hands were tied by their obligation to base their estimates on the “data.”
An economist named Frank Hollenbeck stated the problem well in a 2016 essay titled “The limits of empirical economics”:
…positive empiricism in economics is very limited and in many cases useless.
So what is the economist to do? He goes back to theory, realising that empiricism is there to assist theoretical work but not to be confused with the foundation or replacement of economic theory.
What is a better alternative?
In the case of Moody’s, the obvious alternative would have been to send some of their number-crunchers, or other employees, out to visit a random sample of subprime mortgage recipients to assess the likelihood that they would repay their loans. But of course, the result would have amounted to a smaller quantity of “data” than their database of historical loan repayments contained, and it would have required more than a dose of mere judgment – anathema to the data fanatics that, unfortunately, virtually all economists now are.
Back to greedflation
But what about the case of the claim of “greedflation”? Here, rather than immediately diving into data, as economists are wont to do, at least since Friedman, economists should ask the obvious question, the one Henny Youngman popularized, “Compared to what?”
The greedflation claim is that corporations are jacking up prices more than they should. That, quite obviously, should immediately elicit the question, “More, compared to what?”
The answer to that question lies in the phrase, “more than they should.” More than they should compared to what?
More than they should in theory, of course. What else could it mean? It could mean “more than they morally should” – and that is really what the greedflation advocates mean – but that claim can never be adjudicated because how much they morally should raise prices is not a question for scientific or even economic adjudication but for philosophy.
Therefore, to answer the question “Compared to what?” we must refer to basic economic theory. Let us now consider an example of a possible way to do that.
Corporations will, of course, raise prices if the market will bear it. Our commitment to capitalism and free markets allows that this is okay (it is “fair”) as long as the market is one that is free, competitive, and transparent. Such a market, and how prices are set in it, is analyzed in Economics 101 under the heading of supply-demand-equilibrium theory.
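That textbook framework is concrete enough to compute with. The sketch below solves for the market-clearing price under assumed linear demand and supply curves; all parameters are made up for illustration and are not calibrated to any real market.

```python
# Textbook supply-demand equilibrium with assumed linear curves.
# Demand:  Qd = a - b*P   (quantity demanded falls as price rises)
# Supply:  Qs = c + d*P   (quantity supplied rises with price)
# Setting Qd = Qs and solving for P gives the clearing price:
#   P* = (a - c) / (b + d)

def clearing_price(a: float, b: float, c: float, d: float) -> float:
    """Price at which quantity demanded equals quantity supplied."""
    return (a - c) / (b + d)

# Illustrative parameters only (not taken from this article's figures).
a, b = 120.0, 0.8  # demand intercept and slope
c, d = 15.0, 0.4   # supply intercept and slope

p_star = clearing_price(a, b, c, d)
q_star = a - b * p_star  # quantity traded at the clearing price
print(p_star, q_star)  # 87.5 and 50.0
```

At a price above $87.50 in this toy market, supply exceeds demand and the price is bid down; below it, demand exceeds supply and the price is bid up – the usual Economics 101 story.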
An analysis of prices when there is an obstruction in the supply chain for lower-cost providers
Let us take the example of oil prices. Recently, the market for oil has been disrupted by the fact that the supply from Russia to Western countries has been curtailed because of the Russia-Ukraine war. As a possible way to determine what oil prices should be, and as an avenue to determining whether prices are too high compared with what they should be, we’ll take the theoretical approach.
Suppose Figure 1 shows demand-supply equilibrium for oil before an obstruction in the supply chain causes some low-cost providers to be removed. The profits of the higher-cost providers are in triangle ABC. The numbers used are arbitrary, but within the realm of current reality.
Figure 1.
Oil demand and supply before supply chain disruption removing some low-cost providers
Now suppose that Figure 2 shows demand-supply equilibrium after the removal of some providers.
Figure 2.
Oil demand and supply after supply chain disruption removing low-cost providers
In this example, which is obviously based on a certain set of assumptions, price increases by about 9.8% from $102.50/bbl before the obstruction to $112.50 after, but the profits of the remaining providers increase about 27% from $3.0 billion before the obstruction to $3.8 billion after.
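The mechanism behind Figures 1 and 2 can be mimicked numerically. The toy model below represents supply as a step function built from producers with assumed capacities and marginal costs (none of these numbers are the ones behind the figures above), removes the low-cost producers, and recomputes the clearing price by bisection. The qualitative result matches the example: the price rises modestly while the remaining producers’ profit rises much more.

```python
# Toy model of a supply-chain obstruction removing low-cost producers.
# Producers are (capacity, marginal cost) pairs; each supplies its full
# capacity whenever the price covers its marginal cost. Demand is linear.
# All numbers are illustrative assumptions, not the article's figures.

def clearing_price(producers, a, b):
    """Bisect for the price where step-function supply meets
    linear demand Q = a - b * P."""
    def supply(price):
        return sum(cap for cap, cost in producers if cost <= price)
    lo, hi = 0.0, 1000.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if supply(mid) > a - b * mid:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def profit(producers, price):
    """Producer surplus of the producers active at this price."""
    return sum(cap * (price - cost) for cap, cost in producers if cost <= price)

low_cost = [(10.0, 40.0)]  # the obstructed (e.g., sanctioned) suppliers
others = [(30.0, 50.0), (20.0, 70.0), (15.0, 90.0), (10.0, 100.0)]

a, b = 100.0, 0.3  # assumed linear demand Q = 100 - 0.3 * P

p_before = clearing_price(low_cost + others, a, b)
p_after = clearing_price(others, a, b)
print(round(p_before, 2), round(p_after, 2))  # ~90 -> ~100: price up ~11%
print(round(profit(others, p_before), 2),
      round(profit(others, p_after), 2))      # ~1600 -> ~2250: profit up ~41%
```

Under these assumptions the price rises about 11% while the surviving producers’ profit rises about 41% – no collusion or “greed” required, just the ordinary workings of a competitive market after low-cost supply is withdrawn.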
This is the profit these oil producers can earn fairly under the usual assumptions of free market economic theory.
It goes without saying that this result is the product of the assumptions I have made. But it is illustrative. Other assumptions could be made that might move the example closer to the reality of the situation. The point is that it provides a basis for answering the “Compared to what?” question. The flinging of empirical findings back and forth that is the usual stuff of economic debate does not answer that question in a way that comes close to enabling a common benchmark to be agreed upon. In the empirical form of the debate, those debating on each side are implicitly assuming some “Compared to what?” benchmark. But since that benchmark is never explicitly stated, the participants in the debate, each of whom has their own benchmark implicitly in mind, are inevitably comparing apples to oranges.
The overemphasis on data devoid of theory and the overuse of statistical analysis and multiple regression have been serious mistakes in the field of economics, indeed in some other fields as well. This error will be difficult to correct, given the tyranny of metrics that is out of control in our quantification-obsessed society.
Economist and mathematician Michael Edesess is adjunct associate professor and visiting faculty at the Hong Kong University of Science and Technology, managing partner and special advisor at M1K LLC, and a research associate of the Edhec-Risk Institute. In 2007, he authored a book about the investment services industry titled The Big Investment Lie, published by Berrett-Koehler. His new book, The Three Simple Rules of Investing, co-authored with Kwok L. Tsui, Carol Fabbri and George Peacock, was published by Berrett-Koehler in June 2014.
1 M. Friedman, Essays in Positive Economics, pp. 3–43 (1953).