Behavioural science risks leading decision-makers astray if its findings are overhyped.

Quirks are quotable. The popular media voraciously report on behavioural research described as showing how our decision-making can be buffeted by seemingly minor cues. We read how we can be unwittingly nudged to make choices by the size of plates in restaurants, the placement of products on shelves, the background music in advertisements, the ambient temperature when choosing colours and memorable movie scenes when dating. Experiments demonstrating decision-making biases can be fascinating, especially when they are accompanied by engaging personal stories ("Turns out I was worried about all the wrong things") or images (think of those irrationally irresistible chocolate truffles).

Demonstrations of susceptibility to subtle influences are important. Even tiny changes in individual decision-making can mean a lot. Shifting the behaviour of just a small percentage of consumers can mean the difference between profit and loss. Elections sometimes swing on small changes in voter turnout. Smart messaging can help public health officials increase vaccination rates or encourage employees to save for retirement, making big differences in individual lives and society as a whole. Those possibilities — for public policy as well as for profit — have led to a burst of enthusiasm for behavioural research under the premise that it can unlock the secrets to minor advantages.

But before policy-makers, political consultants and modern Mad Men go trawling through the academic literature for behavioural gold, they should note that the subtle cues that work in a lab are not always as effective in the real world.

And before those of us in the lab promise too much, we should recognize the risks of leading decision-makers astray and discrediting our science by overhyping the practical importance of our research, however sound its foundations.


In psychology, as in biology, successful experimentalists are skilled at getting the effects that they want. Biologists know how to grow the organisms that interest them, while suppressing others. Psychologists know how to isolate the cues that interest them, while holding other factors constant.

That skill allows experimentalists to focus on the effects that matter to them, by making them as large as possible while excluding alternative explanations of what they observe, and by controlling them as much as possible. That control distinguishes experimental biologists from epidemiologists, who must struggle to find a signal in the noise of the complex world that shapes health and disease. And it separates experimental psychologists from economists, who must contend with masses of data as they try to deduce the beliefs and preferences that people reveal in their choices.

But the price paid for that control is the difficulty of generalizing findings from the rarefied conditions of the lab to the complex world in which life transpires. The behavioural researcher may discover that experimental effects are like orchids: elegant, replicable and theoretically informative, but not easily reproduced or observed outside the greenhouse of the lab.

In the lab, a single cue such as a reminder of our own mortality might tip an uncertain and unimportant decision: say, how much candy to eat, how hard to work on an experimental task or what to say about our intentions to exercise. Outside the lab, though, we may be bombarded by competing cues that affect our choices: the reminder of a sad movie, the sight of an infuriating politician, a reaction to the colour red or concern for the well-being of future generations.

As a result, it’s hard to predict what anyone will do in any specific real-world situation without knowing all the cues that are present and how potent they are. Scientists in a sub-sub-field continue to study the nuances of their favourite cue, in order to understand just how it works. Gradually, they learn how various factors affect it: Seeing what others do? Being paid for "the right answer"? Having prior experience?

That knowledge allows them to make better — but never firm — predictions about what will happen in the real world. This results in predictions such as "Older people will probably behave like college students, unless perhaps they pay closer attention to unusual cues or have prior experience or have more stable emotions or…"

Difficulty extrapolating from the lab to the world should not discourage behavioural researchers. Indeed, the very difficulty of replicating lab results demonstrates the power of subtle changes in cues, creating opportunities for future research to uncover new processes and cues, and the reasons familiar ones have limits.

In a story familiar to psychology students, Clever Hans (der kluge Hans in German) was a turn-of-the-20th-century horse that appeared capable of doing maths — but only for its trainer. It was the comparative biologist and psychologist Oskar Pfungst who traced the horse’s apparent abilities to its detection of subtle (and perhaps unwitting) cues in the trainer’s body language. That discovery — the "Clever Hans" effect — helped spur research into nonverbal communication.

Conversely, if nothing affects an effect, it is hard to learn how it works. My colleagues and I once experienced a maddeningly robust result that turned out the same way, no matter how we varied the experimental conditions.

We were studying how people assess the limits to their own knowledge. A typical research item would be: "Is absinthe (a) a liqueur or (b) a precious stone? Choose the correct answer. Now give the probability, from 50% to 100%, that your answer is the right one." We ran the study in many ways. But whatever we tried, the most important variable was always how difficult the questions were. People tend to be overconfident with hard questions and underconfident with easy ones.

It took studies that disrupted this pattern to suggest its sources. For example, asking subjects to consider why they might be wrong revealed that people are unduly swayed by reasons supporting their chosen answers, and that they do better when they stop to think. Giving people aggregate feedback ("You’ve been wrong 20% of the time when you’ve been 100% confident") revealed patterns that did not emerge naturally. Decision-making research, it turns out, is as complicated as the decisions it studies.
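The calibration pattern described above can be sketched in a few lines of code. This is a hypothetical illustration, not the authors' analysis: the function name, the invented trial data and the 0.5–1.0 confidence scale are assumptions made for the example, with confidence minus accuracy used as a simple measure of miscalibration.

```python
# Hypothetical sketch of a confidence-calibration analysis.
# Each trial pairs a stated confidence (0.5 to 1.0, as in the
# absinthe-style questions) with whether the answer was correct.

def calibration_gap(trials):
    """Mean confidence minus mean accuracy.

    Positive values indicate overconfidence; negative values
    indicate underconfidence.
    """
    mean_confidence = sum(conf for conf, _ in trials) / len(trials)
    accuracy = sum(1 for _, correct in trials if correct) / len(trials)
    return mean_confidence - accuracy

# Invented data mimicking the pattern in the text:
# hard questions draw high confidence but low accuracy...
hard = [(0.9, True), (0.9, False), (0.8, False), (1.0, True), (0.8, False)]
# ...while easy questions are answered better than people claim.
easy = [(0.7, True), (0.8, True), (0.9, True), (0.7, True), (0.8, True)]

print(round(calibration_gap(hard), 2))  # positive: overconfident
print(round(calibration_gap(easy), 2))  # negative: underconfident
```

The aggregate feedback mentioned in the text ("wrong 20% of the time at 100% confidence") is just this kind of summary, computed within a single confidence band rather than across all trials.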

What, then, is the public to make of those behavioural studies that find their way onto news sites and sometimes go viral? In terms of informing personal or public policy decisions, learning about any research should be to the good. It shouldn’t hurt to know something about how, say, plate size can signal how much is normal to eat, or how anger can blind us to reasons for our problems other than the source of our ire.

But neither should we expect too much from these discoveries. No single factor is the whole story for any decision, and expecting more is a formula for failure, as a result of unwarranted faith in simplistic explanations or policies.

Like any other behaviour, over-the-top reporting can be explained in cognitive and motivational terms.

Cognitive explanations consider the effects of natural ways of thinking. For example, people tend to see events as more likely when those events are easily remembered or imagined. Reliance on this mental shortcut (known as the availability heuristic) is generally effective. But it can also produce biased judgments. Vivid crime reports can exaggerate a sense of danger. Creative news reports can make minor psychological effects highly imaginable. Scientists who see an effect every day can forget how skilled they are at creating it.

Motivational explanations consider the effects of desires on behaviour. Thus scientists and reporters may deliberately oversell their stories, perhaps arguing that their audiences expect hype and therefore know how to discount it. Or they may unwittingly be less critical of evidence that supports their story than of evidence that does not. They may even be succumbing to the temptation to imagine the results of studies that have yet to be conducted.

Good public policy cannot be based on intuition. Demands for evidence-based public policy must seek out research based on many studies, conducted in diverse settings, by scientists with different perspectives. Such complex collaborations, which pool evidence from multiple sources, are normal in engineering but uncommon in the social sciences. Without them, however, it’s impossible to give simple ideas the detailed attention needed to turn them into viable policies.

The ability of individual scientists to exercise control in lab settings allows them to produce vital insights into processes that affect how we make decisions. But to use those insights to improve decision-making in complex, real-world situations where such control is impossible, we will need to draw on the processes studied by many investigators. Without such collaborative research, disciplined by rigorous empirical evaluation, the nascent behavioural revolution in policy will fail to live up to its potential and become just another disappointing fad.