Putting aside the controversy over misinterpreted p-values, I think it’s worth at least thinking about how the corporate market research world thinks about the use of statistical significance in general.
Most of us took a stats class in college, and for one reason or another the 95% confidence level was the first one we were presented with. I distinctly remember my econometrics professor telling us that the 95% level was completely arbitrary, and that later on we would learn that statistical significance does not imply economic significance. More confusing yet, something could be economically significant without being statistically significant.
Basically, I’ve learned to stop paying attention to p-values in my world.
So, what happens when a client is testing two ads for a media buy?
For simplicity, say we are testing which ad — A or B — better incites respondents to purchase.
We set up a null hypothesis:
H0: P(A) = P(B), where P() is a function whose output is the proportion of respondents planning to purchase the product.
Ha: P(A) > P(B) or P(A) < P(B)
We collect the data and calculate the p-value, which comes out above 0.05.
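As a concrete sketch of the test above — with made-up numbers, since the post doesn't give any — here is a standard two-proportion z-test in Python, using only the standard library:

```python
import math
from statistics import NormalDist

def two_prop_z_test(x_a, n_a, x_b, n_b):
    """Two-sided z-test of H0: P(A) = P(B), using the pooled proportion."""
    p_a, p_b = x_a / n_a, x_b / n_b
    pooled = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value: probability of a |z| at least this large under H0
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical survey results: 250/1000 intend to buy after Ad A, 240/1000 after Ad B
p_value = two_prop_z_test(250, 1000, 240, 1000)
print(f"p = {p_value:.3f}")  # p ≈ 0.60, nowhere near 0.05
```

With a one-point gap and a thousand respondents per cell, the test comes back far from significant — exactly the situation the rest of this post is about.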
Ok, great. So now what? The two ads are equal in their ability to incite potential customers? Is that really useful?
At the end of the day, all we can do is compare point estimates. If P(A) = 25% and P(B) = 24%, then, well, quite frankly, it doesn’t really matter what your p-value is. As the research provider, you either have to say:
Your p-value is too high to claim significance, but we still think you should go with Ad A.
Your p-value is too high, so you need to do more research.
The latter kind of misses the point. It shows a lack of sympathy for the decision-making process, and a lack of awareness of the cost of research. Maybe p < 0.2 was good enough, considering how much additional certainty would cost.
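The cost of that additional certainty can be made concrete. Below is a rough sketch using the textbook normal-approximation formula for the sample size needed per group to detect a difference between two proportions; the 25% vs. 24% gap and the 80% power figure are my assumptions, not from the post:

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha, power=0.8):
    """Approximate n per group to detect p1 vs p2 with a two-sided test at level alpha."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # quantile for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a 25% vs 24% difference:
n_95 = sample_size_per_group(0.25, 0.24, alpha=0.05)  # at 95% confidence
n_80 = sample_size_per_group(0.25, 0.24, alpha=0.20)  # at 80% confidence
print(n_95, n_80)  # roughly 29,000 vs roughly 17,000 respondents per ad
```

Relaxing the threshold from 95% to 80% confidence cuts the required sample by something like 40% — which, at typical per-complete survey costs, is exactly the kind of trade-off a research buyer cares about.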
Either way, the case against a 95% confidence level has little to do with statistics and much more to do with researchers not hiding behind an academic threshold that is inappropriate for corporate research.