
Stat Tests Harm Decisions

Introduction

Scott Armstrong presents examples where significance tests misled non-marketing decision makers, and seeks marketing examples.


A section of the latest International Journal of Forecasting (vol. 23, 2007) contains my paper, "Significance tests harm progress in forecasting," along with commentaries. In response, Ezra Hauer sent me his paper, "The harm done by tests of significance," Accident Analysis and Prevention, 36 (2004), 495-500. He has been working on this problem since 1983. In his 2004 paper, he provides three examples where the researchers were misled by their tests of significance, and thus made incorrect recommendations with respect to right-turn-on-red, automobile speed limits, and paved shoulders on two-lane roads. In other words, given only a summary of the data, people would have been led to the correct decision. But given the data summary and the significance tests, they made incorrect decisions.
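One way such a mismatch can arise: a treatment with a practically important effect can easily fail a significance test in a modest sample, and "not significant" is then misread as "no effect." A minimal sketch in Python, using made-up numbers (the segment counts and crash counts below are hypothetical illustrations, not figures from Hauer's paper):

```python
import math

def two_prop_summary(x1, n1, x2, n2):
    """Return the estimated relative reduction in event rate (group 2
    vs. group 1) and a two-sided two-proportion z-test p-value
    (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    reduction = (p1 - p2) / p1                     # relative reduction vs. control
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return reduction, p_value

# Hypothetical data: 200 road segments per group; 20 crashes without
# the treatment, 16 with it.
reduction, p = two_prop_summary(20, 200, 16, 200)
print(f"estimated crash reduction: {reduction:.0%}")   # a 20% reduction
print(f"two-sided p-value: {p:.2f}")                   # well above 0.05
```

The data summary says the treatment is associated with a sizable reduction in crashes; the significance test, read as an accept/reject rule, would lead a decision maker to dismiss it.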

I expect that people in marketing could provide additional examples of how tests of statistical significance led to poor decision-making, and ELMAR might be a good place to record these failures.

It would also be interesting if researchers could provide examples of cases where people given both data summaries and statistical significance tests made better decisions than those who received only the summaries. To date, I have been unable to find such an example. Once we find some examples, we could start to address the question of whether there are any conditions that would justify the use of statistical significance.

My paper is available in full text at


J. Scott Armstrong
Professor of Marketing, 747 Huntsman, The Wharton School, U. of PA,
Phila, PA 19104

home phone 610 622 6480
Home address: 645 Harper Ave., Drexel Hill, PA 19026
Fax at school: 215 898 2534