Detecting bias in marketing (and related) studies

When we say “bias,” we’re talking about the Four Horsemen of the Replication Crisis: publication bias, p-hacking, low power, and HARKing (hypothesizing after results are known). What these forms of bias have in common is that they produce p-values that are only barely significant (typically between .01 and .05). Studies with such p-values are unlikely to replicate. We usually can’t tell which of the four horsemen caused the bias, but we can see that it’s there when we look at a series of studies in aggregate and notice an excess of barely significant p-values.
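A minimal simulation (illustrative only, not taken from any of the studies below) shows why this aggregate signature emerges. If underpowered studies run a small true effect through a publication filter that keeps only p < .05, most of what survives lands just under the threshold:

```python
import random
import math

def p_from_z(z):
    # Two-sided p-value for a z statistic (normal approximation).
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
TRUE_Z = 0.5  # small true effect => low power (~8% here)
published = []
for _ in range(100_000):
    z = random.gauss(TRUE_Z, 1.0)  # observed z in one simulated study
    p = p_from_z(z)
    if p < 0.05:                   # publication filter: only significant results appear
        published.append(p)

barely = sum(p > 0.01 for p in published) / len(published)
print(f"{len(published)} 'published' studies; "
      f"{barely:.0%} have p between .01 and .05")
```

In this toy setup roughly three quarters of the “published” p-values sit between .01 and .05, even though nobody p-hacked anything: the filter alone creates the pile-up (and, equivalently, the spike in z-scores just above 1.96).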

Can the effects be real, though?

Some have speculated informally that the results are barely significant because researchers are very good at predicting a priori how large a sample they will need, and are simply economizing. This idea rests on three false assumptions: (1) that effect sizes are consistent across different contexts, (2) that researchers can accurately predict the effect size, and (3) that researchers are willing to accept an enormous number of false-negative studies (which would be very wasteful and unfruitful). Two resources that shed more light on this: Psychology’s Renaissance (2018) by Nelson, Simmons, and Simonsohn, and a blog post by Quentin Andre.
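Assumption (2) is worth quantifying. A rough power calculation (normal approximation for a two-sample test, alpha = .05 two-sided, 80% power; a sketch, not any particular study's method) shows how sensitive the required sample size is to the predicted effect size, so “perfect a priori planning” would demand implausibly accurate guesses:

```python
import math

Z_ALPHA = 1.96  # critical z for alpha = .05, two-sided
Z_BETA = 0.84   # z corresponding to 80% power

def n_per_group(d):
    """Approximate n per group to detect standardized effect size d."""
    return math.ceil(2 * ((Z_ALPHA + Z_BETA) / d) ** 2)

for d in (0.5, 0.4, 0.3, 0.2):
    print(f"d = {d}: n ~ {n_per_group(d)} per group")
```

Misjudging the effect as d = 0.5 when it is really d = 0.2 means planning for a few dozen participants per group when several hundred are actually needed, which is why precise sample-size prophecy is not a plausible explanation for the observed p-value distribution.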

What evidence is there for bias in marketing research?

Here is a list of all of the bias studies in marketing that I am currently aware of. The evidence is overwhelming.

| Title | Relevant conclusion | Access | Year posted (or published) |
| --- | --- | --- | --- |
| Tighter nets for smaller fishes: Mapping the development of statistical practices in consumer research between 2011 and 2018 | Bias in marketing journals has not decreased in light of the replication crisis. | Read the preprint | 2021 |
| Noise in the process: an assessment of the evidential value of mediation effects in marketing journals | Most mediation tests in marketing are too close to zero, signaling massive, systemic bias. | Read the preprint | 2021 |
| If Consumer Psychology Wants to be a Science It Has to Behave Like a Science | Z-curve shows that, relative to every other field of psychology, marketing studies show poor evidential quality based on their distribution of p-values (too many are too close to p = .05). | Visit blog post | 2021 |
| What works to increase charitable donations? A meta-review with meta-meta-analysis | Disappointing results from a meta-meta-analysis of interventions designed to increase donations: most effect sizes are slight (mean r = .08), with evidence for publication bias. | Read the article | 2020 |
| Message Design Choices Don’t Make Much Difference to Persuasiveness and Can’t Be Counted On—Not Even When Moderating Conditions Are Specified | Massive, systemic bias in persuasive messaging research. | Read the paper | 2021 |
| No reason to expect large and consistent effects of nudge interventions | Big problems with bias in nudge intervention research. | Read the letter | 2022 |
| No evidence for nudging after adjusting for publication bias | Title says it all. | Read the letter | 2022 |
| Left-truncated effects and overestimated meta-analytic means | Massive, systemic bias in the nudge literature. | Read the letter | 2022 |
| We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell Us about Publication Bias and p-Hacking in Online Experiments | Massive, systemic bias in the marketing literature, as indicated by a spike in z-scores just above 1.96 (p < .05). | Read the article | 2022 |

Theory- and context-level bias studies that are relevant for marketing