The basic concept of bias detection is simple. If too many studies are “straining for significance,” meaning they are only just barely significant, that looks suspicious. Such a pattern is unlikely with true effects, but it is exactly what you expect under systematic, widespread p-hacking. With true effects and adequate power, most p-values should be well below .001; studies with p = .045 are unlikely to replicate.
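The pattern is easy to see in a toy simulation. A minimal sketch, assuming z-tests, roughly 90% power for the true-effect case, and a deliberately crude p-hacking model (no effect, but test three outcomes and report the best one); all numbers here are illustrative, not taken from any of the papers below:

```python
# Toy illustration: p-value distributions under a true effect vs. p-hacking.
import math
import random

random.seed(1)
N = 100_000

def two_sided_p(z: float) -> float:
    # Two-sided p-value of a z-test via the normal survival function
    return math.erfc(abs(z) / math.sqrt(2))

# True effect studied at ~90% power (noncentrality about 3.3)
true_ps = [two_sided_p(random.gauss(3.3, 1)) for _ in range(N)]
true_sig = [p for p in true_ps if p < .05]
just_barely = sum(.025 < p < .05 for p in true_sig) / len(true_sig)
very_small = sum(p < .001 for p in true_sig) / len(true_sig)

# Crude p-hacking: a null effect, but three outcomes tested, best one reported
hacked_ps = [min(two_sided_p(random.gauss(0, 1)) for _ in range(3))
             for _ in range(N)]
hacked_sig = [p for p in hacked_ps if p < .05]
hacked_barely = sum(.025 < p < .05 for p in hacked_sig) / len(hacked_sig)

print(f"true effect:   {very_small:.0%} of significant results have p < .001; "
      f"{just_barely:.0%} are just barely significant")
print(f"p-hacked null: {hacked_barely:.0%} of significant results are just barely significant")
```

Under the true effect, the significant results pile up at very small p-values; under the null-plus-selection model, they pile up just under .05. That pile-up just below the threshold is the signature that bias-detection methods look for.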
Tighter nets for smaller fishes: Mapping the development of statistical practices in consumer research between 2011 and 2018
Marketing has not acted to reduce bias in response to the replication crisis. Read the preprint.
Noise in the process: an assessment of the evidential value of mediation effects in marketing journals
The bulk of mediation tests in marketing journals produce confidence intervals that just barely exclude zero. That is very bad news for evidential value. Read the preprint.
If Consumer Psychology Wants to be a Science It Has to Behave Like a Science
A z-curve analysis shows that, relative to every other field of psychology, marketing studies have poor evidential value: their p-value distribution contains too many results too close to p = .05. Visit blog post.
What works to increase charitable donations? A meta-review with meta-meta-analysis
Disappointing results from a meta-meta-analysis of interventions designed to increase donations: most effect sizes are small (mean r = .08), there is evidence of publication bias, and there are “compelling reasons to doubt generalizability” beyond the lab. Read the preprint.
Message Design Choices Don’t Make Much Difference to Persuasiveness and Can’t Be Counted On—Not Even When Moderating Conditions Are Specified
It doesn’t seem to matter which message framing you choose. Read the paper.
No reason to expect large and consistent effects of nudge interventions
A reanalysis of the PNAS meta-nudge data showed massive bias. Read the preprint.