Some deep thinkers in marketing have postulated that marketing studies may fail to replicate because of the ever-shifting context around consumption. It's an interesting theory with some surface merit. The context certainly has changed: there has been a tremendous shift in how we consume and what media we are exposed to over the last 100 years.
We don't really need to test this idea, though, because the overwhelming majority of failures to replicate marketing studies can be traced back to bias in the original studies, bias that is easily detectable with simple forensic tools such as p-curve. By that I mean that marketing researchers are misusing statistics: instead of using inferential statistics to test their ideas, they are using them to find ideas. People run studies and tests until they obtain a significant result, which in most cases is just random noise. The problem of bias in marketing research has been documented and substantiated by many independent teams.
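To make the p-curve logic concrete, here is a minimal simulation sketch (Python, with hypothetical parameters of my own choosing, not taken from any of the studies discussed here). When a real effect exists, the significant p-values pile up near zero; when a null effect is "p-hacked" through optional stopping (adding participants and re-testing until p &lt; .05), far more studies come out significant than the nominal 5%, and the significant p-values sit closer to .05.

```python
import math
import random

def p_value(z):
    # Two-sided p-value for a z statistic under the standard normal
    return math.erfc(abs(z) / math.sqrt(2))

def run_study(effect=0.0, n=30):
    # One honest study: one-sample z-test of a mean with known SD = 1
    xs = [random.gauss(effect, 1.0) for _ in range(n)]
    z = (sum(xs) / n) * math.sqrt(n)
    return p_value(z)

def hacked_study(batch=10, max_batches=10):
    # "p-hacked" null study: add a batch of data, re-test, and stop
    # the moment the result is significant (optional stopping)
    xs = []
    for _ in range(max_batches):
        xs += [random.gauss(0.0, 1.0) for _ in range(batch)]
        n = len(xs)
        p = p_value((sum(xs) / n) * math.sqrt(n))
        if p < 0.05:
            break
    return p

def p_curve(ps):
    # Among significant results, count p < .025 vs .025 <= p < .05
    sig = [p for p in ps if p < 0.05]
    low = sum(p < 0.025 for p in sig)
    return low, len(sig) - low

random.seed(1)
true_ps = [run_study(effect=0.5) for _ in range(2000)]
hacked_ps = [hacked_study() for _ in range(2000)]
print("true effect, p-curve (p<.025 vs .025-.05):", p_curve(true_ps))
print("hacked null, p-curve (p<.025 vs .025-.05):", p_curve(hacked_ps))
print("hacked null, share significant:",
      sum(p < 0.05 for p in hacked_ps) / len(hacked_ps))
```

This is the basic intuition behind p-curve as a forensic tool: a literature built on real effects is right-skewed (significant p-values bunched near zero), while one built on noise plus flexible analysis is not.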
The idea of shifting context was directly tested by the Data Colada team when they attempted to replicate 10 recent marketing studies from the Journal of Marketing Research and the Journal of Consumer Research. Only one study unambiguously replicated. Because the studies were very recent and the methods were followed as closely as possible, context changes shouldn't be a factor. Two of the studies failed to replicate because of confounds in the study design: in one, an uneven dropout rate across conditions caused huge issues; in the other, the instructions in one condition led participants toward the choice the researchers preferred. This doesn't tell us why the other seven studies failed to replicate, but it does rule out the "changing context" idea.
More recently, a team in Japan attempted to replicate ten sensory marketing studies using Japanese samples. If changing context drove replicability, a cross-cultural replication should fare especially badly. In fact, most of the replication results could be predicted simply by looking at how significant the original studies were: you can eyeball it in this graph by seeing how far the red confidence interval sits from zero. Only one replication failure is not explained by the graph; the other 90% of the results could be predicted at a glance. That means bias (p-hacking, publication bias, low power, HARKing) explains seven of the eight failures to replicate.
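That eyeball test has a simple statistical basis: the distance of a 95% confidence interval from zero maps directly onto the original study's z statistic. A CI whose lower bound nearly touches zero implies z barely above 1.96, i.e., p just under .05, exactly the fragile profile that predicts replication failure. Here is a tiny sketch with hypothetical numbers (not values from the Japanese replication paper):

```python
def original_z(estimate, ci_low, ci_high):
    # Back out the z statistic from a 95% CI:
    # standard error = CI width / (2 * 1.96)
    se = (ci_high - ci_low) / (2 * 1.96)
    return estimate / se

# Hypothetical effect estimates with the same point estimate but
# different CI widths: the first barely excludes zero (fragile),
# the second sits comfortably far from zero (robust).
print(original_z(0.50, 0.01, 0.99))
print(original_z(0.50, 0.30, 0.70))
```

The rule of thumb is that original results with z near the 1.96 threshold rarely survive a well-powered replication, while results several standard errors from zero usually do.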
Then there is Dubois et al. (2012). The results in the original were highly significant and remarkably consistent: definitely not p-hacking. Yet it didn't replicate (see Table 3 below), so something else is going on. That mystery will be explained in time too, and it's not an issue of changing context, by the way.
In conclusion: yes, I do think it's possible for changing context to cause a replication to fail. But no, I don't think that's what's going on with all these marketing study failures. The real problems are (a) bias, (b) confounds, and (c) fraud. There is also a cost to the "context" defense: it diminishes the original study to say it won't replicate in a modern context, or if any variable is slightly tweaked. If a finding only works for Harvard undergrads in 1998, what was the point of publishing the study in the first place?
Aaron Charlton, PhD, MBA is a marketing professional who works in industry for Away Clinic and the Metascience & Marketing Lab and lives in Mesa, Arizona, USA. He is a former academic and still takes an interest in improving the quality of research in the field of marketing.