The shaky behavioral science research that Google fell for
You’re probably familiar with the research on mindless eating—especially the study with the “endless soup bowl”. In that study, researchers had participants come into a restaurant and eat from a soup bowl that, unknown to them, was connected to a large vat of the stuff. As participants enjoyed their lunch, the bowl would slowly fill back up. How much more soup do you think these participants ate, on average, than those with a normal soup bowl? According to Brian Wansink, head of the Cornell University Food and Brand Lab, a lot: 73%.
Well, the lab that did these studies is under investigation for producing a number of papers with “…serious errors, reporting inconsistencies, impossibilities, plagiarism, and data duplications” (see more here). According to Tim van der Zee, a PhD candidate at Leiden University:
“…there are currently:
- 42 publications from Brian Wansink which are alleged to contain minor to very serious issues,
- which have been cited over 3700 times,
- are published in over 25 different journals, and in 8 books,
- spanning over 20 years of research.”
You may ask: What’s the harm?
Well, this lab’s research is the basis for a number of health interventions, including Google’s decision to nudge employees to use smaller plates in their cafeterias. As EatingWell.com reports: “Directly above the plates hangs a sign with a gentle reminder that people who use larger plates tend to eat more. The result: a 32 percent uptick in small-plate use.”
Given the lab’s sloppy research practices, I think it’s highly likely that this study will also turn out to be false.
Situations like this are a wake-up call for those of us who work in the behavioral sciences. They’re a reminder of the need to stay ever-vigilant and think critically. In that spirit, I have an article coming out for the “Member’s Only” section of Medium in the next couple of weeks. It lays out some principles you should use in order to think properly about psychological/behavioral research. I’ll send a link to the full piece when it’s out.
The first principle I put forth is this: If it sounds like magic, it probably is.
The second principle I lay out is: Small changes generally do not lead to large results.
There are exceptions to each of these, obviously. For example, small changes sustained over an extended period of time CAN lead to huge results. But that’s not what I mean here. I’m referring to the small tweaks that get absurd media attention, such as the “putting eyes behind a donation jar increases donations by 48%” tweak.* Small changes generally do not lead to oversized effects like that.
The one glaring exception that everyone will point to is defaulting. But I don’t consider most of the popular examples of defaulting to be valid examples of behavior change. Automatically opting someone into an organ donation program because they overlooked a checkbox doesn’t really fit most people’s definition of behavior change: you’re not actually getting anyone to do anything. You’re either doing stuff on their behalf or getting permission to do things to them (after they’re dead, in the case of organ donation), often by using tricky form design. But that’s another article for another time.
The bottom line: If something sounds too good to be true and isn’t from an area of research with a long track record (and plenty of replication), I’d think twice before implementing it. At the very least, I’d do a well-controlled study of my own to verify.
Be careful out there.
* A good meta-analysis on this found zero effect: http://www.sciencedirect.com/science/article/pii/S1090513816301350