Replications in Marketing: Myth and Reality

Third-party replication is the foundation of good science. Without replication, how can we know whether a scientific finding is even true? And if an entire field loses its sanity and begins producing false knowledge at scale, how would we know without replication? To underscore the usefulness of replications in marketing, I want to dispense with some faulty logic that has cropped up to justify the poor outcomes that bad practices have produced.

Myth: Many marketing findings have successfully replicated.

Reality: No, they have not. As of this writing, only 3 of 34 (8.8%) high-powered, preregistered direct replications have been unambiguously successful. Numerous conceptual replications have been deemed successful. A conceptual replication is one in which the context and approach deviate substantially from the original study; indeed, one could argue that virtually all Consumer Behavior articles include some flavor of successful conceptual replication. But there are two problems: (1) conceptual replications, by design, merely test whether a study generalizes to a new context, so a failed conceptual replication provides little information about the trustworthiness of the original finding, and (2) if the original study was bad, a p-hacked conceptual replication can wrongly confirm it. In a direct (a.k.a. “exact as possible”) replication, p-hacking is far less likely to occur because the replicators follow the protocol of the published study as closely as possible. Thus, only direct replications can establish true replicability.
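
What does “high-powered” mean in practice? Typically the replicators size the sample so that, if the original effect were real, the replication would almost certainly detect it. Here is a minimal sketch of that calculation; the effect size and power target are hypothetical placeholders, not numbers from any of the studies above:

```python
# Minimal sketch of sizing a "high-powered" direct replication.
# The effect size (Cohen's d = 0.4) and 90% power target are assumed
# for illustration, not taken from any study discussed in this post.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.4,          # original study's reported effect (assumed)
    power=0.90,               # replicators typically target 90%+ power
    alpha=0.05,
    alternative="two-sided",
)
print(f"Required n per group: {n_per_group:.0f}")  # roughly 132 per group
```

A common alternative heuristic, from Simonsohn’s “small telescopes” paper, is simply to recruit 2.5 times the original sample size.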

Myth: We shouldn’t expect marketing studies to replicate because of the continuously changing nature of consumption practices.

Reality: First, this argument completely undermines any claims of generalizability made in the original studies. It also invites the question: what was the point of publishing the paper in the first place if it was only valid for Harvard undergrads in 1998? Second, the Data Colada replication project addressed this claim directly by attempting to replicate 10 recently published marketing studies from the Journal of Consumer Research and the Journal of Marketing Research. What were the results? One out of ten was unambiguously successful. Third, several recent bias studies have shown that a pattern of “just barely significant” results in marketing indicates low evidential quality in the original studies.
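
To see why “just barely significant” results signal low evidential value, consider what p-hacking does to the distribution of p-values. The simulation below is my own illustration, not taken from the cited bias studies: it models one common p-hack, optional stopping, in which the researcher peeks at the data after every batch of subjects and stops as soon as p < .05.

```python
# Illustrative simulation: with no true effect, honest p-values are
# uniform, so p < .05 happens only 5% of the time. Optional stopping
# both inflates the false-positive rate and piles the reported
# p-values up just under .05 -- the telltale pattern bias tests detect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def peeked_p(batch=10, max_n=100):
    """Add `batch` subjects per cell, test after each batch, stop at p < .05."""
    x, y = list(rng.normal(size=batch)), list(rng.normal(size=batch))
    while True:
        p = stats.ttest_ind(x, y).pvalue
        if p < 0.05 or len(x) >= max_n:
            return p
        x.extend(rng.normal(size=batch))
        y.extend(rng.normal(size=batch))

ps = np.array([peeked_p() for _ in range(2000)])
sig = ps[ps < 0.05]
print(f"false-positive rate: {np.mean(ps < 0.05):.1%}")  # well above 5%
print(f"share of significant p's in (.025, .05): {np.mean(sig > 0.025):.1%}")
```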

Myth: These failed replications are a symptom of problems in Consumer Behavior but do not reflect on other categories of marketing research.

Reality: This is unknown, because only replications of Consumer Behavior studies have been documented. The root of the replication problem appears to be bias in the original studies (p-hacking, low power, publication bias), which is essentially a form of overfitting a model to the data (see the toy example below). There is no reason to believe other subfields that rely on statistical inference are immune. Here is an easy test of whether our econometrician friends’ stock market models are replicable: watch to see whether they start showing up to the business school in Ferraris and helicopters, or stop showing up at all.
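
The overfitting analogy can be made concrete. In this toy example (purely illustrative, mine), we screen many pure-noise predictors against a pure-noise outcome and keep the best in-sample fit; like a p-hacked finding, it looks impressive on the original data and evaporates on fresh data:

```python
# Toy version of the overfitting analogy: pick the best of 50 noise
# predictors in-sample, then check it out of sample. The out-of-sample
# check plays the role of the direct replication.
import numpy as np

rng = np.random.default_rng(7)
n, k = 100, 50                       # 100 observations, 50 candidate predictors
X = rng.normal(size=(n, k))          # pure-noise predictors
y = rng.normal(size=n)               # pure-noise outcome (no real relationship)

corrs = [np.corrcoef(X[:, j], y)[0, 1] for j in range(k)]
best = int(np.argmax(np.abs(corrs)))
print(f"best in-sample |r| = {abs(corrs[best]):.2f}")     # looks 'significant'

# Fresh data = the direct replication
X_new, y_new = rng.normal(size=(n, k)), rng.normal(size=n)
r_new = np.corrcoef(X_new[:, best], y_new)[0, 1]
print(f"same predictor out of sample: r = {r_new:.2f}")   # ~0
```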

Myth: Failed replications don’t tell us why the original study failed to replicate.

Reality: Diligent replicators work hard to identify problems with the original studies. First, bias-detection tools can identify the presence of p-hacking in an original study. Second, the Data Colada project identified two studies with unreported confounds. Lastly, I’ve been working on a data-anomalies case (not public) in which a failed replication revealed six distinct categories of numerical inconsistency in the original paper. The reality is that replications give us incredibly useful information and allow us to build a cumulative science.
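
As an example of the first point, the p-curve (Simonsohn, Nelson & Simmons) exploits the fact that significant p-values from a true effect skew toward zero, while p-values from p-hacking a null effect pile up just under .05. Below is a simplified sketch of its binomial test; the p-values are hypothetical, not from any paper discussed here:

```python
# Simplified binomial version of the p-curve test. Under a true effect,
# most significant p-values should fall below .025; a majority in
# (.025, .05) is the "just barely significant" signature of low
# evidential value. These p-values are made up for illustration.
from scipy.stats import binomtest

reported_ps = [0.041, 0.049, 0.032, 0.044, 0.038, 0.047, 0.019, 0.043]

high = sum(p > 0.025 for p in reported_ps)
result = binomtest(high, n=len(reported_ps), p=0.5, alternative="greater")
print(f"{high}/{len(reported_ps)} p-values in (.025, .05); "
      f"binomial p = {result.pvalue:.3f}")
```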

Myth: Replications don’t teach us anything new.

Reality: Original studies don’t teach us anything new. They could be p-hacked; they could be faked. We don’t really know anything until a finding has been replicated by a neutral third party.

Conclusions

Marketing researchers who are interested in improving scientific quality should be proponents of replications. Gatekeepers in our field should make room for replications at journals of all levels, and they should not base a replication’s publishability on whether it succeeded or failed. Registered Reports provide a good model for carrying out an outcome-agnostic approach to replication.
