On Facebook, Nassim Taleb recently posted a mathematics-heavy PDF about how many of our so-called psychological biases are actually just biases of the researchers. I can’t download the paper today; the recently edited version won’t open. And I can’t add anything to heavy mathematics anyway. But Taleb’s comments give an introduction to some thoughts that have been stewing away since a late-night discussion with Steve Johnson a few years ago. Taleb writes:
‘(M)any psychological "biases" are errors by researchers missing a layer of uncertainty in the model. And the researchers want the government to "nudge" us to make a mistake.’
He’s referring to work done by groups like Britain’s Behavioural Insights Team, called the ‘nudge unit’ in deference to a book co-authored by behavioural economist Richard Thaler. These initiatives are designed to save us from ourselves. The work of people like Thaler, Daniel Kahneman and Dan Ariely reveals fascinating insights into the human predicament and our capacity for misjudgement. We make plenty of seemingly dumb mistakes depending on how things are presented to us, and these researchers have a lot of knowledge to share. But Taleb is right that factors deemed ‘biases’ might sometimes be perfectly rational responses to complex situations.
Take loss aversion—the idea that losing $100 makes us, say, twice as miserable as a $100 windfall makes us happy.
Good researchers treat it as a fact and focus on the ‘why?’ But others seem to send the message that loss aversion is a clear fault, a quirk that we should work to stamp out: we should treat $100 windfalls and losses with equal helpings of happiness and misery. But there are numerous rational thought processes behind loss aversion.
For starters, accumulated money has declining marginal utility—giving $1,000 to a destitute family is going to make more of an impact on their lives than giving the same to Rupert Murdoch. Someone of substantial means might be expected to feel little asymmetry between a $100 windfall and a $100 loss—low loss aversion. But someone struggling will, perfectly rationally, feel much more asymmetry—the $100 loss will hurt them significantly more than a $100 gain will help them.
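This asymmetry falls straight out of any curved utility function. As a rough sketch (log utility is my illustrative assumption here, not a claim about anyone’s actual preferences), compare the felt impact of a $100 swing at two wealth levels:

```python
import math

def utility(wealth):
    # Log utility: a standard textbook model of declining marginal utility.
    return math.log(wealth)

def felt_change(wealth, delta):
    # Utility impact of a dollar gain or loss at a given wealth level.
    return utility(wealth + delta) - utility(wealth)

for wealth in (1_000, 1_000_000):
    pleasure = felt_change(wealth, +100)
    pain = -felt_change(wealth, -100)
    # Pain-to-pleasure ratio for a $100 swing: roughly 1.1 at $1,000
    # in the bank, and essentially 1.0 at $1,000,000.
    print(f"wealth ${wealth:,}: loss/gain asymmetry = {pain / pleasure:.3f}")
```

Even this crude model says the struggling household should weight the loss more heavily than the gain, while the wealthy one should be close to indifferent—exactly the pattern loss-aversion experiments record.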
You know who is often cash-strapped? Students and other citizens who give up their time in exchange for $50 or a free lunch so a behavioural economist can play a few games with them.
A perhaps more important point to understand about loss aversion is the asymmetry of rigged games. In a lab, a behavioural economist asks us whether we’d like to put $100 of our own money on the table. If a coin flip comes up heads, they’ll give us $205 and if it comes up tails, we lose our $100. The expected return on our $100 investment is an instantaneous 2.5%. The argument is that not taking that bet is irrational, especially if you’re not living on the poverty line. But many will reject it regardless, including rich people—why?
Well, how often do people come up to you in the real world offering such a bet? And when they do, do you hold suspicion that the coin might be weighted or otherwise rigged? That would be a rational response. And when such games are rigged, the advantage falls against the victim 100% of the time (the only person who might walk down the street offering a game of chance rigged in your favour is a behavioural economist).
So, when it comes to probability-based decisions, we need a sufficient upside to compensate for the possibility of rigged games. While we might take a 50:50 (or worse) bet in a regulated casino or against a trusted friend, we won’t on the street. Our choice is context-dependent, and that’s sensible risk management in the face of uncertainty.
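The arithmetic here is stark. Using the numbers from the lab bet above ($100 stake, $205 back on heads), a sketch of how a small suspicion of rigging flips the decision (the 3% figure is my illustrative assumption, not from the experiment):

```python
def expected_return(p_rigged, stake=100, payout=205):
    # Fair coin: heads nets (payout - stake), tails nets -stake, each 50:50.
    fair_ev = 0.5 * (payout - stake) + 0.5 * (-stake)  # +$2.50 per play
    # A rigged game falls against the victim 100% of the time,
    # so the stake is simply lost.
    rigged_ev = -stake
    return (1 - p_rigged) * fair_ev + p_rigged * rigged_ev

print(expected_return(0.0))   # trusted game: +$2.50, the 2.5% return
print(expected_return(0.03))  # 3% chance of rigging: about -$0.58
```

A suspicion of roughly 2.4% ($2.50 / $102.50) is all it takes to push the bet underwater, so refusing a stranger’s coin flip while accepting the same odds from a casino or a friend is not a bias at all.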
As goes a game of chance, so go many of life’s more important decisions. Loss aversion may as well be encoded into our DNA given millions of years of reciprocation among the tribe. Sharing is a strategy that undoubtedly helped survival and reproduction in a hunter-gatherer world. But suckers quickly exited the gene pool.
A good researcher won’t fall for the above simple mistakes. But there are infinitely more complex distortions that might well trip them up. The point of all this is not to question the whole field of behavioural economics but to highlight the risk that research identifies biases that are real in a lab but not in real life. And such research might encourage governments and other authorities to nudge us away from time-tested complex strategies to ‘better’, more optimised strategies formed in a lab, perhaps naively.
In the 1970s, several US departments and institutions took what they considered the best science available at the time and created dietary guidelines for the country. The seemingly sound basis was that if clogged arteries caused heart attacks, reducing dietary cholesterol would help prevent clogged arteries and therefore heart attacks.
Some factions involved were likely suffering from self-serving bias (the USDA?). But others, like Senator George McGovern, seemed genuinely motivated by the desire to improve the health of a nation. These efforts later morphed into the Food Pyramid and, relevant to this post, large scale government and private sector efforts (all over the world) to shift people towards what was considered a healthier diet.
This was a disparate, proto nudge unit which very successfully encouraged the western world to reduce its consumption of butter, fatty meats, offal and other foods deemed no-nos.
The perhaps unanticipated, but anticipatable, consequence was that we significantly increased our intake of simple, refined carbohydrates. With food companies spotting a profitable opportunity in the low saturated fat message, consumption of processed foods loaded instead with sugar and unsaturated (but inflammatory) vegetable oils also skyrocketed. Pasta and pesto trumped steak and vege for dinner. Cereal decimated bacon and eggs at breakfast.
It is the opinion of many researchers today, and the suspicion of this blogger, that this well-intentioned proto nudge unit deserves as much credit for the obesity and Type 2 diabetes epidemic as Coca-Cola and McDonald’s.
Good intentions are not enough. Any plan to nudge people’s decisions needs to be based on sound science, with an eagle eye to unintended and/or unknowable effects – and with a significant bias towards doing nothing in the absence of compelling evidence. I hope the Behavioural Insights Team and other such units popping up around the world are forced to study nudge policy failures in deeper detail than their successes.