Replications Reply

Further comments on this thread by Heiner Evanschitzky



Response from Heiner Evanschitzky

With respect to the comment by Tim Ambler:

As an author of the paper "Replication Research's Disturbing Trend" (available in full text at ), I am pleased that our work is stimulating some debate among colleagues.

I agree with Tim's statement that there is an "editor bias" against replication research. Our results, 36 published replications in JM, JMR, and JCR between 1974 and 2004, as well as anecdotal evidence from my own (sometimes painful) experience in trying to publish replications, strongly support his observation. Few disagree about the importance of replication research for the advancement of science, yet reality looks quite different.

Why are so few replication attempts published? First, we have to admit that not all published work is worth replicating. (Scott Armstrong has published empirical estimates of the proportion of papers in forecasting and in marketing that have any practical value; neither estimate exceeded 3%.*) Without getting into the rigor-versus-relevance debate, having one's study replicated might be an excellent indicator of the value of the research, and studies that have been replicated might be rewarded in some way. Who knows, maybe we should think about a "replication index," similar to the citation index, as an additional indicator of the impact a study has.
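To make the idea concrete, the proposed "replication index" could, in its simplest form, just count published replication attempts per original paper, much as a citation index counts citations. The sketch below is purely illustrative: no such index exists, and the paper identifiers and data are invented for the example.

```python
# Hypothetical "replication index": published replication attempts per
# original paper, analogous to a citation count. A minimal sketch only;
# the identifiers and counts below are invented.
from collections import Counter

def replication_index(replications):
    """replications: a list with one entry (an ID of the original paper)
    per published replication attempt. Returns attempts per paper."""
    return Counter(replications)

# Invented example: three replication attempts targeting two papers.
attempts = ["paper-A", "paper-A", "paper-B"]
index = replication_index(attempts)
print(index["paper-A"])  # 2 attempts at replicating paper-A
print(index["paper-B"])  # 1 attempt at replicating paper-B
```

A fuller version might weight successful and failed replications differently, but even a raw count would already signal which findings the field considers worth re-testing.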

I support Tim's call to editors to grant publication space to replication research and to identify studies worth replicating. In so doing, we can strengthen the foundations of our discipline.

* J. Scott Armstrong (2004), Commentary on Mort et al., "Perceptions of Marketing Journals by Senior Academics in Australia and New Zealand," pp. 51-61.

Heiner