The Failure of Behavioral Science

Daniel Kahneman and Richard Thaler won Nobel prizes for their work in behavioral science, propelling that discipline to the forefront of the advisory profession. But new research shows that behavioral scientists forecast no better than simple models do, and that mutual funds based on behavioral principles fare no better than index funds.

Behavioral science is the study of human behavior through systematic experimentation and observation. Behavioral scientists do the following:

  • Study why people sometimes behave in ways that may not maximize their own well-being, such as making choices in the present that do not maximize their happiness in the future.
  • Examine how seemingly arbitrary contextual factors influence people’s decisions, beliefs and attitudes.
  • Test how different incentives affect people’s motivation and behavior.
  • Analyze how people judge others’ traits and characteristics based on features of their face or voice.
  • Investigate how consumers can be encouraged to make, avoid or change spending decisions.
  • Design policy interventions that can help people make choices they would personally recognize as optimal in the long run.

Among the disciplines that fall under the broad label of behavioral science are anthropology, cognitive psychology, consumer behavior, social psychology, sociology and behavioral economics/finance.

We expect experts in any field to make accurate predictions within their domain of expertise, and we rely on those predictions when making decisions. An important question, then, is: How accurate are the forecasts of behavioral scientists? Dillon Bowen sought the answer in his August 2022 study, “Simple Models Predict Behavior at Least as Well as Behavioral Scientists.” He analyzed data from five studies in which 640 professional behavioral scientists predicted the results of one or more behavioral science experiments. He compared the behavioral scientists’ predictions to random chance, linear models and simple heuristics such as “behavioral interventions have no effect” and “all published psychology research is false.” (A toy example of scoring forecasts against one of these heuristics appears after the list of studies below.) The five studies included:

  • An exercise study that examined 53 behavioral nudges to encourage 24-Hour Fitness customers to exercise more. Ninety practitioners from behavioral science companies were then asked to predict how effective each nudge would be.
  • A flu study that used 22 text-message treatments to encourage Walmart customers to get a flu vaccine. Twenty-four professors and graduate students, most affiliated with top 10 business schools, were then asked to predict how effective each treatment would be.
  • A study that assembled data from 126 randomized control trials (RCTs) from two of the largest nudge units in the United States. The researchers measured the RCTs’ effectiveness as the percentage point increase in adopting a target behavior compared to a control condition. Two hundred thirty-seven behavioral scientists from academia, nonprofits, government agencies and nudge units were then asked to estimate the effectiveness of 14 randomly selected RCTs.
  • An experimental study that measured how much effort participants exerted in a key-pressing task in 18 experimental conditions. Participants scored points for alternating between pressing “a” and “b” for 10 minutes, earning one point each time they pressed “a” then “b” (a toy implementation of this scoring rule appears after the list). The experimental conditions were monetary and nonmonetary incentives to score points, such as piece-rate payments, time-delayed payments and peer comparisons. The researchers measured the effort participants exerted in each condition as the number of points they scored. Two hundred thirteen academic economists were then asked to predict the results.
  • A study that attempted to reproduce the results of psychology studies. The researchers defined a replication as successful if it obtained a p-value of less than .05 and the estimated effect’s direction matched the original experiment (a sketch of this rule in code appears after the list). They then asked 76 psychology professors and graduate students to predict how likely 44 of the studies were to replicate successfully.
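
To make Bowen’s comparison concrete, here is a minimal sketch of how a forecaster can be scored against the “behavioral interventions have no effect” heuristic: compare the forecaster’s mean absolute error with the error of always predicting a zero effect. The numbers below are invented for illustration (chosen so the heuristic wins, mirroring Bowen’s finding) and are not data from any of the five studies.

```python
# Hypothetical illustration: scoring one expert's forecasts against the
# "behavioral interventions have no effect" heuristic. All numbers are
# invented; none come from the five studies Bowen analyzed.

def mean_absolute_error(predictions, outcomes):
    """Average absolute gap between predicted and observed effects."""
    return sum(abs(p - o) for p, o in zip(predictions, outcomes)) / len(outcomes)

# Observed effects of several nudges, in percentage points versus control.
observed = [0.2, 1.1, -0.3, 0.5, 0.0]

# One expert's (invented) forecasts for the same nudges.
expert = [2.0, 3.5, 1.0, 2.5, 1.5]

# The "no effect" heuristic predicts zero for every nudge.
no_effect = [0.0] * len(observed)

print(mean_absolute_error(expert, observed))     # expert error, about 1.8
print(mean_absolute_error(no_effect, observed))  # heuristic error, about 0.42
```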
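
The key-pressing task in the fourth study has a mechanical scoring rule: one point each time a participant presses “a” and then “b.” Here is a toy implementation of that rule, assuming the keystrokes arrive as a single string; the function name and input format are our own illustration, not taken from the study.

```python
def score_key_presses(keystrokes: str) -> int:
    """Count completed 'a'-then-'b' pairs, one point each.

    Repeated or stray keys neither score nor reset progress; scoring
    simply waits on the key that advances the sequence.
    """
    points = 0
    waiting_for = "a"  # the next key that advances the sequence
    for key in keystrokes:
        if key == waiting_for:
            if key == "b":
                points += 1       # completed an "a" then "b" pair
                waiting_for = "a"
            else:
                waiting_for = "b"
    return points

print(score_key_presses("ababab"))  # 3
print(score_key_presses("aabbba"))  # 1: only one completed pair
```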
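
The replication criterion in the last study is simple enough to state as code: a replication succeeds if its p-value is below .05 and its estimated effect points in the same direction as the original. A minimal sketch follows; the function name and arguments are our own.

```python
def replicated(p_value: float, original_effect: float, replication_effect: float) -> bool:
    """Success under the study's rule: p < .05 and matching effect direction."""
    same_direction = (original_effect > 0) == (replication_effect > 0)
    return p_value < 0.05 and same_direction

print(replicated(p_value=0.03, original_effect=0.40, replication_effect=0.15))   # True
print(replicated(p_value=0.20, original_effect=0.40, replication_effect=0.35))   # False: not significant
print(replicated(p_value=0.01, original_effect=0.40, replication_effect=-0.10))  # False: wrong direction
```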