Statistical findings are worse than useless. They give the illusion of knowledge. Even when they’re true for a population, they’re false when applied to any given person. To rely on statistics as a way of understanding how people work is to take up superstition in the name of science.
-- William T. Powers
Tell that to a marketing agency.
Actually, marketers are well aware that statistics don’t tell them HOW people work; they only tell them what gets the most response. Knowing that a “Johnson box” improves results on mailing A to list B only suggests that it might work with mailing C to list D; it does not tell you how or why it worked, nor does it give you any real way to explain the result when it doesn’t work.
Most marketing is useful folklore; good theories in marketing are few and far between. Even the best teachers of marketing rarely rise to strong theories; the ones that do are mostly either borrowing their models from NLP and hypnosis, or reinventing them.
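To make that concrete, here is a toy split test in Python. Every name and number here (the response counts, the mailings, the lists) is invented for illustration; it is a sketch of the kind of arithmetic a direct-mail tester does, not anyone’s actual data:

```python
# Toy split test: does adding a "Johnson box" lift response on THIS
# mailing/list pair? All counts below are invented for illustration.
from math import sqrt

# Hypothetical results for mailing A sent to list B
control = (230, 10_000)   # (responses, pieces mailed) - plain letter
variant = (290, 10_000)   # (responses, pieces mailed) - with Johnson box

p1 = control[0] / control[1]
p2 = variant[0] / variant[1]

# Standard two-proportion z-test on the lift
pooled = (control[0] + variant[0]) / (control[1] + variant[1])
se = sqrt(pooled * (1 - pooled) * (1 / control[1] + 1 / variant[1]))
z = (p2 - p1) / se
print(f"lift = {p2 - p1:.2%}, z = {z:.2f}")

# A z-score above 2 says the lift is probably real for THIS mailing
# and list. It says nothing about WHY readers responded, or whether
# the same device will work on mailing C to list D.
```

Note what the test output contains: a lift and a confidence figure, nothing more. The “why” has to come from somewhere outside the statistic.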
The quote is not about whether statistics might tell you something useful about people in general, it’s about understanding HOW a specific individual is doing what they’re doing. A statistical model can suggest paths to investigate, but it can’t tell you what’s actually going on in a specific case.
IOW, it’s an epic FAIL for a mechanic to “rely on statistics as a way of understanding how” your car works (or doesn’t) instead of actually observing what a specific car is (or is not) doing.
A good mechanic will often use the following reasoning: 90% of cars with symptom X have problem Y, so that is what I will check first.
Then, he will discover whether Y is actually the problem (ETA: for this particular car), and if not, discard that hypothesis and look for something else. This essential step is missing from all papers in psychology reporting statistical results. The fault is even worse when those results are reported in terms such as (to take a recent example) “willpower is a scarce resource”.
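A minimal sketch of that triage-then-verify loop, in Python; the fault names, base rates, and the `inspect` callback are all hypothetical:

```python
# Hypothetical base rates: of cars showing this symptom, the fraction
# whose fault historically turned out to be each cause.
BASE_RATES = {
    "dead battery": 0.90,
    "bad alternator": 0.07,
    "corroded starter cable": 0.03,
}

def diagnose(car, inspect):
    """Check candidate faults in base-rate order, but let direct
    inspection of THIS car, not the statistic, settle each one."""
    for fault in sorted(BASE_RATES, key=BASE_RATES.get, reverse=True):
        if inspect(car, fault):   # observe the actual car
            return fault          # hypothesis confirmed for this car
        # Hypothesis discarded: the 90% prior says nothing more about
        # this particular car once the check comes back negative.
    return None  # none of the usual suspects; look for something else
```

The statistic only sets the search order; the verdict on any one car comes from the inspection.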
I can’t square
“Statistical findings are worse than useless.”
with
“Most marketing is useful folklore.”
I’d say “useful folklore” is by definition better than useless.
The original quote was a scientist talking about finding deep theories of how people work. You can statistically validate such a theory, but the statistics themselves do not tell you anything about how something works.
More specific example: the recent link about how publicly announcing your goals leads them to fail in a certain fraction of cases. This is a nice statistic to quote, but it doesn’t really say why, despite the attached theorizing about social energies and so forth. In my comment on that post, I mentioned several mechanisms I’ve observed for how a public commitment can lead to failure, and NONE of them were the social mechanism posited in the original article. (Which isn’t to say I haven’t also seen that mechanism at work.)
The point is that without a good idea of what to look for, vaguely obtained statistics are not very useful. You can potentially validate a good model with statistics, but by their very nature, statistics are a measurement of what you don’t know.
If X% of people fail when they make a public commitment, what does that tell us about the other 100-X%? What about those same people under different circumstances? Such statistics say nothing about HOW the failure or success actually occurs, which is the one thing we most want to know in the scientific/epistemic context—a true model of behavior.
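As a toy illustration of that point, the following Python simulation (mechanism names, mixes, and rates all invented) builds two populations with identical aggregate failure rates but different underlying mechanisms; the headline X% cannot distinguish them:

```python
import random

random.seed(0)

# Two hypothetical populations, both failing 40% of the time overall,
# but via different mixes of mechanisms behind the failures.
MIXES = {
    "population A": {"social mechanism": 0.30, "other mechanism": 0.10},
    "population B": {"social mechanism": 0.10, "other mechanism": 0.30},
}

def simulate(mix, n=100_000):
    counts = {mechanism: 0 for mechanism in mix}
    for _ in range(n):
        r = random.random()
        threshold = 0.0
        for mechanism, p in mix.items():
            threshold += p
            if r < threshold:
                counts[mechanism] += 1
                break  # this trial failed via this mechanism
    failure_rate = sum(counts.values()) / n
    return failure_rate, counts

for name, mix in MIXES.items():
    rate, counts = simulate(mix)
    print(f"{name}: aggregate failure rate = {rate:.1%}  {counts}")

# Both populations report ~40% failure: the same X%, produced by
# different mechanism mixes. The statistic alone can't say HOW any
# one failure happened.
```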
In contrast, marketing, pickup, and self-help are instrumental fields, where not having a “true” model is not necessarily a problem. But the quote is from a scientist, talking about scientific usefulness.