
The mystery of the “Red Potato Chip Study” Part 1

Pictured: the author attempting to replicate the red chip protocol, with his children acting as lab assistants.

Before Brian Wansink was relieved of his duties for scientific misconduct and subsequently resigned, he ran the “Red Potato Chip Study” (Geier, Wansink & Rozin, 2012). The study claimed that people would eat fewer potato chips, and therefore be healthier, if red chips were mixed in with the pale yellow ones. The idea was that the red chips would act as a stop sign, signaling to the chip eater’s unconscious mind that they had had enough. The study has long been discredited, but Health Psychology, for whatever reason (it may be a very good reason, by the way), has not taken any action on it. Because it has not been retracted, the study recently gained new life by substantially propping up the effect sizes in a meta-analysis of nudge interventions published in PNAS (Mertens et al., 2022). One of the most interesting aspects of the study is its huge effect sizes. The four effects from the paper that Mertens et al. (2022) chose to include in their meta-analysis were d = 3.0, d = 2.8, d = 2.1, and d = 1.9. Effects this large would normally be considered far too big for any kind of nudge intervention, so the situation is indeed suspicious.
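For context, Cohen's d is the standardized mean difference between two groups: the difference in means divided by the pooled standard deviation. Here is a minimal sketch of the arithmetic, with made-up numbers (none of these figures are from the study) showing how extreme a group difference has to be to reach d = 3.0:

```python
# A sketch of Cohen's d (hypothetical numbers, not the study's data):
# the mean difference between two groups in units of their pooled
# standard deviation.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical example: control group eats 30 chips on average (SD 5),
# red-chip group eats 15 (SD 5), 25 people per group.
print(cohens_d(30, 5, 25, 15, 5, 25))  # -> 3.0
```

In other words, reaching d = 3.0 requires the groups to differ by three full standard deviations; in this made-up example, the red chips would have to cut average consumption in half with almost no overlap between the two groups.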

The criticism

The Red Potato Chip Study was heavily criticized for numerical inconsistencies by James Heathers, one of the data sleuths who originally outed Wansink. I can’t attribute it, but at some point people started talking about the possibility that the study never happened, and that it might not even be possible to create red, stackable potato chips following the protocol in the study.

Richard Morey subsequently made three attempts to replicate the red potato chips: attempt #1 | attempt #2 | attempt #3. Despite his best efforts, including several deviations from the original protocol to give it a better chance of success, he was unable to produce a red, stackable potato chip that looked and tasted like the original chips apart from the intentionally manipulated color.

Our follow-up to the criticism

Building on Richard Morey’s initial experiments, I assigned my class of 36 marketing research students to attempt to replicate the red potato chip stimuli. They were provided with stackable potato chips, red food coloring, and six weeks to complete the project, working in teams of 5-6 students. They also had the advantage of learning from Morey’s approach. They were instructed to follow the described experimental protocol exactly (see screenshot above), but also to run trials with subtle deviations that Wansink may have used but forgot to mention when writing up his protocol. Their deviations included using a mold so the chips wouldn’t flatten out when drying, sprinkling salt rather than dipping the chips in saltwater, high-temperature drying (rather than low), and using an air fryer. I was surprised to learn that air fryers are standard cooking appliances in midwestern college student apartments.

The results were not incredibly promising. Here is a preliminary overview:

  1. It is very easy to get the red color.
  2. It is not possible to get the shape right without a mold. Remember that the chips need to be stackable; the original protocol doesn’t mention a mold.
  3. No team was able to get the flavor right by dipping the chips in saltwater; the chips simply lose more salt than they gain. The only team to successfully match the flavor sprinkled salt on the chips instead. One team even used water so saturated with salt that the salt was settling out at the bottom, to no avail. So the flavor can’t be matched by following the protocol, but sprinkling worked well.
  4. Size was an issue no matter what. Dipping the chips in water shrinks them about 1/8″ (3.175 mm) all the way around, and it’s very noticeable.
When I asked my class who thought the experiment really happened, no one raised a hand. The majority voted that it never happened. One student pointed out that you can never really know, because there could be something we didn’t try. The same student noted, though, that if the chip coloring process really worked, it would make no sense to use a tomato basil chip for the same purpose in Study 2, since that would add another confound (flavor differences). These undergrads will surprise you with how smart they are sometimes!

Overall assessment

I’ve been doing a lot of meta-analytic work lately and I’ve learned that there are two ways to get massive effect sizes from subtle interventions:
  1. Data fabrication
  2. Confounds
First, consider the confound explanation. Maybe the chips didn’t look or taste quite right. People were suspicious. Maybe the way the red chips were presented freaked people out and they didn’t want to eat them (see screenshot below).

If the red chips were different enough, suspicious in any way, or disgusting to the taste, that could explain why people ate far fewer in the red chip condition.

Second, consider data fabrication. The numerous numerical inconsistencies pointed out by James Heathers, together with the poorly described experimental protocol, are consistent with patterns we see in data fabrication. I should also note that the effect sizes in Wansink’s subtle-intervention studies are generally too large to be believed. See how they compare to the effect sizes of the other nudge interventions below.
These are the data from the Mertens et al. (2022) meta-nudge study.

At the end of the day, it may not matter whether the results are caused by an improper study design resulting in a confound, or due to data fabrication. It’s clear that the results shouldn’t be trusted.

Next steps

Next, I plan to build on my students’ trials by taking what worked best and testing it with the students in my lab. We will do our best to produce red chips that both look right and remain stackable.

Why should anyone care?

I believe the meta-nudge paper (Mertens et al., 2022) shows that the Red Potato Chip Study continues to have a large influence on science, and if it is fatally flawed, it should be addressed more thoroughly. It’s not okay to simply leave papers like that to exist as zombie papers: graduate students will waste their precious time and resources trying to get them to replicate, and meta-analyses will give people the wrong impression. We need a better, more thorough way to deal with these problems.
