Okay, so we are starting to learn that there may be some serious issues with the way we currently do research. That’s okay. It’s part of the process. We can take a hard look, make changes, and come out better for it. Specifically, it looks like there may be a lot of noise mining happening in marketing journals. Here is my oversimplified explanation of noise mining: instead of using inferential statistics to test our ideas, which is good, we are using these tools to find ideas, which is usually bad.
My personal journey into meta-science
I first became aware of the extent of the noise mining problem when I started a meta-analysis of mediation tests in marketing and found that nearly all of the test confidence intervals were too close to zero to be trusted. I was shocked! I knew there were a lot of problems with confounds and such, but I had no idea that so many of the effects could be just random noise. This was right before the pandemic hit in 2020; we didn’t post our preprint until near the end of 2021, though. In the preprint we leave the door open for the possibility that many of the tests are true results that are simply underpowered, but noise mining is the most parsimonious explanation for the phenomenon.
After making this discovery I decided to look at my own unpublished papers to see whether the effects were real or simply artifacts of noise mining. I used a tool called p-curve to do that. You can do it yourself if you are open-minded about the possibility that you may be on the wrong track. Basically, you go through your papers and find the one or two statistical tests from each study that you would include in a highly condensed version of the paper, such as a conference submission. For each paper you will end up with a set of five or so p-values. Now look at their distribution. How many of the p-values are higher than .01? If they all are (as was the case with my papers), your data are unreliable. Honestly, it would be best if most are less than .001. I ended up shelving all of the empirical papers I came out of my PhD program with. Needless to say, my tenure and promotion committee wasn’t enthused about that, but I felt really good about it. I was a new man after that, with a new spring in my step. I instead focused on a conceptual paper that was part of my dissertation and a few meta-analysis projects.
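If you want to automate the counting step, here is a minimal sketch in Python. To be clear, this is my own illustration of the bin-counting heuristic described above, not the actual p-curve analysis, which fits a formal model to the distribution of significant p-values; the function name and bins are made up for this example.

```python
# Quick-and-dirty evidential value check, loosely in the spirit of p-curve.
# The binning here is my own illustration; the real p-curve app does a
# formal statistical test rather than just counting bins.

def evidential_value_check(p_values):
    """Summarize how many focal p-values fall in each significance bin."""
    sig = [p for p in p_values if p < .05]  # p-curve only looks at significant results
    return {
        "n_significant": len(sig),
        "below_.001": sum(p < .001 for p in sig),
        "below_.01": sum(p < .01 for p in sig),
        "between_.01_and_.05": sum(.01 <= p < .05 for p in sig),
    }

# Focal tests from a hypothetical paper: every p-value hovers just under .05,
# the telltale pattern of noise mining rather than a true effect.
print(evidential_value_check([.041, .032, .049, .022, .038]))
```

If the last bin holds almost everything, as in this made-up example, that is the red flag described above.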
2020 went out with a bang
At the end of 2020, there was an epic debate over preregistration published in JCP. Many were surprised by the arguments of Pham and Oh (2021) that consumer psychology “should be regarded as exploratory.” They made the case that the models we use are too complex to be predicted a priori (as required by the basic logic of inferential statistics), and they argued against any requirement that authors show evidence of having preplanned their analyses. The editor chimed in that it would be unfair to require preregistration. Simmons et al. (2021) agreed that it shouldn’t be a universal requirement and presented two excellent papers arguing for the benefits of preregistration.
Many more revelations came in 2021
- In February, Uli Schimmack posted evidence that p-values in marketing cluster closer to p = .05 than in any single field of psychology, indicating low evidential value. He also posted evidence that Michel Pham (of Pham and Oh 2021 fame) made a career of shoddy research practices.
- Also in February, Janiszewski and van Osselaer published their finding, based on survey data, that most consumer research is not fully preplanned, meaning it is exploratory in nature but reported as confirmatory.
- In April, in one of the biggest shocks of my career to date, Leif Nelson presented the results of two mass replication efforts centered on marketing studies. One was a cornucopia of recent JMR and JCR papers; the other focused on resource scarcity. Very few studies replicated. Up to that point, few had ever tried to run preregistered direct replications of marketing studies. You can see an up-to-date listing of all direct replications in marketing on my replication tracker.
- In late October, I presented our finding that the bulk of mediation analysis confidence intervals in marketing are too close to zero to be trusted. For me this was a very disturbing finding, because understanding the process is so foundational to what we do. Definitely a big wake-up call!
- In November, Antonia Krefeld-Schwalb posted a preprint showing that with online samples, sample sizes have increased but effect sizes have gotten smaller. This is bad news for statistical power, which we need in order to be confident we are reporting true effects.
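To see why shrinking effect sizes can swamp growing samples, here is a rough power calculation using only the normal approximation to the two-sample t-test. This is my own back-of-the-envelope sketch, not an analysis from the preprint; `approx_power` and the specific numbers are illustrative.

```python
from math import sqrt
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test via the normal approximation."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality of the z statistic
    return 1 - z.cdf(z_crit - ncp)   # ignores the (tiny) lower-tail rejection region

# Doubling the sample does not help if the effect size halves:
print(round(approx_power(d=0.5, n_per_group=50), 2))    # ≈ 0.71
print(round(approx_power(d=0.25, n_per_group=100), 2))  # ≈ 0.42
```

In this toy case, going from 50 to 100 participants per cell while the effect shrinks from d = 0.5 to d = 0.25 cuts power from about 71% to about 42%: bigger samples, weaker evidence.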
Metaphor priming and other “one weird trick” papers under fire
- A paper claiming that signing at the top of a form (vs. signing at the bottom) substantially increases honest reporting was retracted due to the discovery of fake data. Hilariously, the HBR summary of this research, with “One Weird Trick” in the actual title, has not been retracted.
- The finding that people pay substantially more at auctions when the room temperature is hotter received an expression of concern due to four categories of data errors.
- The finding that people crave light-producing products, such as flashlights, when primed to recall unethical past deeds received an expression of concern due to impossible means (the original data were not available).
- The idea that bitter flavors can affect people’s judgments of fairness, and that these effects could differ by culture, was retracted due to data anomalies.
New methodological discoveries in 2021
- Quentin André shows why it’s bad to remove outliers separately for individual conditions (exclusion should be done blind to condition).
- Uri Simonsohn shows the problem with testing interaction effects when your data are nonlinear (and by the way, they’re nearly all nonlinear due to floor and ceiling effects 🤢). He shows alternative methods that reduce Type I error and is building an accompanying R package.
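André’s point about exclusions can be illustrated with a toy sketch. Everything here is my own invention (variable names, data, and the z cutoff); the only idea taken from the paper is that the cutoff must come from the pooled sample and be applied identically in every condition, so exclusions cannot depend on the hypothesis being tested.

```python
from statistics import mean, stdev

def exclude_outliers_pooled(values, conditions, z_cut=2.0):
    """Flag outliers using a cutoff from the POOLED sample, blind to condition.

    Computing a separate cutoff inside each cell can shift cell means apart
    and inflate false positives; this sketch follows the blind alternative.
    """
    m, s = mean(values), stdev(values)            # one cutoff for everyone
    keep = [abs(v - m) <= z_cut * s for v in values]
    return [(v, c) for v, c, k in zip(values, conditions, keep) if k]

# Toy data: one wild response (40.0) in the treatment cell.
data = [5.1, 4.9, 5.3, 40.0, 5.0, 4.8, 5.2, 5.1]
conds = ["treat", "treat", "treat", "treat", "ctrl", "ctrl", "ctrl", "ctrl"]
cleaned = exclude_outliers_pooled(data, conds)
print(len(cleaned))  # 7 observations survive; the 40.0 is dropped
```

Because the same cutoff is applied everywhere, the decision to drop a response never depends on which condition it came from.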
Things to look forward to in 2022
Hopefully we will finally get to see the results of the SCORE Project, a systematic look at both analysis reproducibility and study replicability across fields of science. They have run a lot of marketing replications, and I have not yet seen any of the results. I’m really looking forward to it! I’m also aware of a number of other replication efforts in the works. It will be interesting to see which kinds of effects actually replicate.
My opinion about improving marketing research in 2022
It seems like people are interested in studying a lot of things that are not well suited to inferential statistics, or to reductionist approaches in general. Here are a few things I would suggest. First, look at the list I maintain of preregistered marketing studies that show strong evidential value (low p-values, not p-hacked). This will help people see what our tools can actually be used for, and I would love it if students started reading these papers in grad seminars. Second, maybe we should do less statistical inference. Why don’t we do more case studies? Personally, I would much rather read a researcher’s observations of marketing happening in real-world contexts than a hypothesis-testing paper derived by noise mining. Maybe it’s time to resurrect the case study in the field of marketing. Finally, we could start an open science-friendly journal in marketing. This would give an outlet to those who are interested in doing more honest research, and it would put pressure on the big journals to reform. Meta-Psychology would be an excellent model. If anybody is interested in starting something like that, I would love to help out.
Happy New Year! Here’s to a great 2022!
- Charlton, A., Montoya, A. K., Price, J., & Hilgard, J. (2021, October 29). Noise in the process: An assessment of the evidential value of mediation effects in marketing journals. Preprint. https://doi.org/10.31234/osf.io/ck2r5
- Janiszewski, C., & van Osselaer, S. M. (2021). The benefits of candidly reporting consumer research. Journal of Consumer Psychology, 31(4), 633-646.
- Krefeld-Schwalb, A., & Scheibehenne, B. (2020, June 4). Tighter nets for smaller fishes: Mapping the development of statistical practices in consumer research between 2011 and 2018. Preprint. https://doi.org/10.31234/osf.io/xjgyk
- André, Q. (2021). Outlier exclusion procedures must be blind to the researcher’s hypothesis. Journal of Experimental Psychology: General. Advance online publication. https://doi.org/10.1037/xge0001069