First, definitely check out the 2021 marketing metascience year in review if you haven’t yet. This is an update of that review.
Let’s get into it! Just like 2021, 2022 brought a deluge of new replications and bias studies. That’s great, because prior to a couple of years ago there had been very little effort to test the trustworthiness of marketing research. The news has mostly been bad, but in a way that’s good: at least we know where we stand, so we can do better.
More bad news for replicability of marketing research
Despite the unwillingness of marketing journals to publish exact/direct replications, dozens have now been run. They’ve been published in blogs, preprints, non-marketing journals, etc. To date, only one true replication study has ever been published in a marketing journal (at least according to the replication tracker). And that journal is now defunct. 🙁 But let’s look at the cumulative set of replications:
- 5 out of 43 high-powered, preregistered direct replications (which match the original study as closely as possible) have been unambiguously successful. That is an 11.6% success rate.
- The two effects that replicated this year were from a systematic replication of 10 sensory marketing studies (8 other studies in that set did not replicate).

I should note that marketing editors are open to “conceptual” replications, and Marketing Letters even has a section devoted to them called “Replication Corner.” Personally, I think that’s great, but conceptual replications by nature are designed to test generalizability; they do very little to test the veracity of the original study. Favoring them over direct replications also shows a lack of understanding of how easy p-hacking is and how widespread it is in the field of marketing. This is where direct replication excels: independent, third-party direct replication is the best tool available to ensure science is factual.
We are still waiting for OSF’s SCORE Project to weigh in on how bad (or good!) marketing is relative to other fields in terms of replicability. The SCORE Project is a massive multi-lab collaboration in which hundreds of studies are either replicated, have their data analyses reproduced, or both. I believe all the replications have been run but not yet made public. The only inside information I have comes from one replication team, who told me they actually got a marketing study to replicate, so that’s good news. Looking forward to seeing it!
Bias studies help explain why marketing studies aren’t replicating
“Bias” is the idea that people are reporting too many *barely* significant results. It indicates some mixture of p-hacking, publication bias, HARKing (hypothesizing after results are known), and low power. The result is an inflated type I error rate, which in turn makes the studies non-replicable. A study can easily be eyeballed for bias by checking how many of the critical hypothesis tests fall in the .01 < p < .05 range. With true, adequately powered effects, most of your critical p-values should be < .001 (a minimal sketch of this check appears after the list below). Here are some of the highlights from 2022:
- With the introduction of cheap and quick online subject pools, sample sizes in marketing have increased substantially, but effect sizes have decreased substantially. Estimated replicability (which can be predicted by aggregating reported p-values) has also not improved since the start of the science reform movement circa 2011 (Krefeld-Schwalb & Scheibehenne 2022).
- Following publication of a meta-analysis of nudge studies in PNAS, three responses published in 2022 cast substantial doubt on the trustworthiness of this kind of behavioral intervention study generally: Szaszi et al. 2022, Maier et al. 2022, and Bakdash & Marusich 2022. The issue is not necessarily that nudges don’t work, but that the science in the area can’t be trusted: because of p-hacking and related issues, we don’t know whether they work, or which ones work. Of course, these are not all marketing studies, but many are, so it makes one pause before wholesale acceptance of the existing literature.
- A study looking at bias in MTurk studies showed that marketing compared poorly with several related fields (Brodeur, Cook & Hayes 2022).

See all of the bias studies in or overlapping with marketing.
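As promised above, here’s a minimal sketch of the eyeball test for bias. It’s a toy Python example: the `bias_check` helper and the p-values fed into it are hypothetical, purely for illustration, not drawn from any real paper.

```python
# Minimal sketch of the "eyeball test" for bias described above.
# Assumption: p_values holds the critical hypothesis tests reported
# in a single paper (these example values are made up).

def bias_check(p_values):
    """Tally critical p-values into the strong and 'suspicious' bands."""
    strong = sum(1 for p in p_values if p < 0.001)
    suspicious = sum(1 for p in p_values if 0.01 < p < 0.05)
    print(f"{len(p_values)} critical tests: {strong} with p < .001, "
          f"{suspicious} with .01 < p < .05")
    if suspicious > strong:
        print("Red flag: mostly barely-significant results.")

bias_check([0.049, 0.032, 0.041, 0.011, 0.038, 0.0004])
```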
JMR retracts a priming paper, calls priming a “cornerstone” of the field, then retracts 2 more priming papers
That cornerstone is slipping away!
Up until 2022, JMR (and all of AMA) had zero retractions, but all of a sudden they got very busy retracting priming articles. By the end of the year, they had retracted (or re-retracted) three priming papers altogether.
- Jayati Sinha’s 2014 JMR paper was retracted due to impossible means (and possibly last digits that were unexpectedly nonrandom). Sinha now stands at 2 retractions and 2 expressions of concern. I reported that last one. I first took interest in Sinha’s work when I saw some comments on PubPeer calling attention to massive effect sizes in some of her papers. I noticed they were priming papers and was immediately interested because I had never before seen a priming study that wasn’t p-hacked. P-hacking is easily detectable in published papers using very simple forensic techniques such as P-Curve and Z-Curve (see the sketch after this list). Fascinating stuff!
- Dirk Smeesters’ 2010 JMR paper was retracted in 2014 due to data fabrication, unretracted in 2018 due to a mistake while switching publishers, and re-retracted in 2022. Smeesters was caught fabricating data by Uri Simonsohn and was run out of the field in 2012 or 2013.
- Fei Gao had a paper retracted for using different data in his analyses than what was in his Qualtrics account. Apparently the university you work for (a) owns your Qualtrics account and (b) can access it at any time, including deleted files. And by the way, a good way to attract data sleuths is to use the word “priming” as the first word of your title: “Priming the Concept of Fullness with Visual Sequences Reduces Portion Size Choice in Online Food Ordering.”
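Since p-curve keeps coming up, here is a rough sketch of its logic in its simplest (binomial) form; the full P-Curve method uses pp-values, and Z-Curve is more involved still, so treat this as illustration only. The idea: if the studied effects were real, significant p-values should pile up well below .05; if they are null effects dressed up by p-hacking and publication bias, significant p-values are (at best) uniform on (0, .05), so no more than about half should fall below .025. The input p-values below are hypothetical.

```python
# Rough sketch of the binomial form of the p-curve test.
# Under H0 (no true effect, no selective reporting), significant
# p-values are uniform on (0, .05), so P(p < .025 | p < .05) = 0.5.
from scipy.stats import binomtest

def pcurve_binomial(p_values):
    sig = [p for p in p_values if p < 0.05]   # p-curve only uses significant results
    low = sum(1 for p in sig if p < 0.025)    # right skew signals evidential value
    result = binomtest(low, n=len(sig), p=0.5, alternative="greater")
    print(f"{low}/{len(sig)} significant p-values below .025 "
          f"(binomial p = {result.pvalue:.3f})")

# Hypothetical p-values from a literature full of barely-significant results:
pcurve_binomial([0.048, 0.043, 0.039, 0.031, 0.022, 0.046, 0.044])
```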

The Fei Gao paper was originally announced by the JMR co-editor under the subject line “properly primed,” in which priming research was hailed as a “cornerstone of consumer behavior scholarship.”

Is it time to move on from priming?

To this day, no priming study has replicated. Yet marketing journals continue to pump out priming studies, and as I said above, priming was called a “cornerstone” of the field by an editor at JMR in 2022. “Priming” is a 2-stage manipulation in which you first try to change the person’s mindset (e.g. collectivist/individualist, moral/immoral, lonely/not lonely). Typically in marketing journals, in the second stage of the experiment, the participant is asked to rate how willing they would be to purchase a product, in a task that is by all appearances completely unrelated to the original prime. More on this topic: No priming study replicated without original author involvement.
Updates on other data anomalists

Nicolas Guéguen, a marketing professor at Université Bretagne Sud, received his 3rd retraction in 2022 (although he publishes mainly in social psych, he holds a marketing professorship, so he counts). He also has 8 expressions of concern. He’s the one who publishes silly studies about whether subtle things like red dresses make women get treated differently. Probably time for the university to look into it… See a catalog of all the articles about marketing’s most notorious working professor.
Although world-famous marketing professor Dan Ariely has not had any further retractions after the one in 2021, he is definitely under a lot of scrutiny. A recent Israeli news show took him to task over issues with (a) his pedophilia/porn/masturbation study and (b) his impossible shredder that doesn’t shred. He also keeps saying things that make people even more suspicious (if that’s possible).


A debate exploded on Twitter about whether it is good to include ex-marketing professor Brian Wansink’s unreliable work in meta-analyses. Wansink was forced out of Cornell for, among other things, making up data. The discussion spilled over into other issues, including whether it’s appropriate to even do a meta-analysis of such a set of seemingly unrelated studies. I cannot do justice to all of the debates, but you can check out the responses to the PNAS meta-nudge paper, blog posts by Andrew Gelman, and a series called “meaningless means” by Data Colada.
The PNAS meta-nudge data 🙂 You can see that Wansink’s effect sizes are generally too big for a p-hacker. He must have been up to something else. I wonder what it was… cc @sTeamTraen BTW sorry to beat a dead horse but there seems to be a lot of doubt about what happened! pic.twitter.com/w8rpml4oST
— Aaron Charlton (@AaronCharlton) January 7, 2022
AMA is unfortunately cultivating the perfect environment for fraud
The AMA (American Marketing Association) is the top association in marketing on both the practitioner and academic sides. It publishes five top journals, including two of the top four (JMR & JM). The year 2022 was not great for transparency and data integrity at AMA. Despite being forced to retract three papers at JMR in 2022 due to data integrity issues, AMA has taken several steps backward, away from the key scientific values of transparency and neutrality. Notably, in all three cases the problems were discovered by outside analysts not associated with AMA at all. Rather than admitting it has a problem, AMA has taken steps that would curtail such sleuthing activity:

- Journal of Marketing codified its closed-data mandate. Going forward, even reviewers are forbidden from seeing the data; only the editor may see it. Note that at Journal of Consumer Research, there are ways for reviewers to see the data. So if the JM policy was based on JCR’s, they left out the good stuff when they copied it.
- The American Marketing Association’s unfortunate fraud policy was posted. The policy (a) discourages authors from sharing data with 3rd parties, (b) bans 3rd-party reporters from seeing any non-public information, (c) disallows anonymous fraud reporting, and (d) seeks to silence people who report fraud. This is in conflict with COPE (the Committee on Publication Ethics), of which all AMA journals are signatories; COPE provides for anonymous fraud reporting.
This is all really bad news for scientific progress because it makes it all too easy for fraudsters to run amok. As other journals move toward open science practices, the weaker journals, such as the AMA journals, will start to attract a disproportionate share of riffraff.
Modern scientific best practices are beginning to be adopted at some non-AMA journals
Marketing Letters (ML). Its new data policy requires open data and incentivizes preregistration; no other marketing journal has open-data requirements. I also saw a tweet by John Lynch saying it would begin accepting registered reports of conceptual replications. And the journal allowed a surprising amount of criticism of the field when it published a bias study (Krefeld-Schwalb & Scheibehenne 2022). Bias studies are seen as very threatening because nobody in marketing wants to admit how high the type I error rate is. Marketing Letters appears to be the most trustworthy of all marketing journals (are you listening, FT50 and UTD Top 100?).
Registered reports at other journals. Psychology & Marketing has also stated it will accept registered reports and has the structure in place to process them, though I’m not sure whether any have been submitted yet. I also saw a tweet that Marketing Science currently has a registered report going through its review process. This is great news, because registered reports are a great way to reduce the type I error rate, which is exactly what marketing is struggling with right now.

Be the change
If you are interested in studying and publishing things that are true, I recommend you check out this list of preregistered marketing studies for ideas on topics and types of studies where you can do replicable research. Note that there’s not a single priming paper on the list. If you are looking for a community to get involved with, I recommend SJDM (the Society for Judgment and Decision Making). Also, join Twitter/Mastodon to get involved in the open science movement.
Aaron Charlton, PhD, MBA is a marketing professional who currently works in industry for Away Clinic and Away PR and lives in Mesa, Arizona, USA. He is formerly an academic and still takes interest in improving the quality of research in the field of marketing.