Beware of Behavioral Economics

In April 2020, James Clear went on Sam Harris’ podcast to talk about his excellent book, Atomic Habits. In the interview, he spoke about my definition of habit formation and described me as a behavioral economist. While I’m a big fan of James’ work, and appreciated the shoutout, I squirmed a bit when he labeled me a behavioral economist.

You see, these days I’m not a big fan of behavioral economics as a discipline. I used to proudly wear the “behavioral economist” label. But in the past few years I’ve become increasingly dismayed by what I see as a consistent pattern of overpromising and underdelivering.

Currently, the best research shows that behavioral economic interventions have, on average, a 1.4% impact on outcome variables in the real world. This would be fine if these interventions advertised a 1.4% impact. The problem is that the studies these interventions are based on promise an average impact of 8.7%.

This means that behavioral economics interventions are only ~16% as powerful as claimed (1.4 ÷ 8.7 ≈ 0.16). That’s shocking.

What would you call it if you purchased a cholesterol medication that promised to reduce your cholesterol levels by 50%, but only reduced them by 8%?

What would you call it if a financial advisor promised you average annual returns of 10%, but delivered only a 1.6% annual return over the following decade?

In areas of life that matter, such as health and finance, we have no problem calling this what it is: false advertising.

But in the academic realm, where the stakes are much lower, we let this behavior slide. Maybe it’s because we subconsciously don’t take academics very seriously, or perhaps it’s because we just assume that the work they’re doing is so complicated that a mistake is bound to happen along the way. Whatever the reason, we’re surprisingly forgiving.

The problem is that those of us who take this research and use it to make multi-million (or billion) dollar decisions are assuming that it’s being done with care. We’re assuming that the academics who are publishing are doing everything they can to make sure that the experiment was designed correctly, that the recruiting was rigorous, that the analysis is appropriate, and that we’re seeing the full data sets that were collected, not just the favorable ones (the file drawer problem).

Unfortunately, this doesn’t seem to be the case.

As has been covered many times before, the behavioral sciences have a dreadful replication rate. Fewer than half of published studies can be successfully replicated; the actual figure is closer to 36%.

We’re lucky to have these figures, since replications are few and far between. Running replications doesn’t help one’s career. If you’re a young researcher looking to make a name for yourself, you’re not going to get very far by rehashing old ideas—you need your own surprising findings to get citations and build a niche. And if you go around testing whether other studies in your field can actually be repeated, you’re not going to make many friends. Well-known academics with power in the field are not going to sit by while you try to poke holes in their life’s work.

So we have a situation where practitioners, people in government and business, have real skin in the game when it comes to their work—they need to show real world impact on the problems they’re trying to solve. They need to base their decisions on a truly accurate, practical model of the world. And we have a situation where academics don’t have any real skin in the game. If their weak study is published, they get another paper on their publishing record. They get one paper closer to tenure. They get applause from their advisor or their department head.

But if they’re wrong, are they going to cost their organization 20 million dollars? Are they going to get fired? If outright data manipulation or fraud is discovered, yes. But if they can merely chalk it up as an innocent mistake or sloppiness, it will probably result in nothing serious. As I said before, it’s not like anyone is meticulously replicating every experiment that comes out. The chances that you, as an academic, will get caught doing sloppy work are close to zero.

Which brings us back to my core point: if you are trying to solve big, important issues, don’t rely on behavioral economics. You wouldn’t go to a financial advisor who has been caught overstating his or her returns by 6x. Why would you go to a field that’s been caught doing that very thing?

Until the delta between experimental results and real-world results gets closer to zero, you should stay away. Your pocketbook, your teammates, and your shareholders will thank you.
