The way traditional rationalists without special training relate to scientific findings is usually by uncritically accepting them as authoritative. One can become less wrong by learning that scientists are not close to perfect. They make mistakes and sometimes deceive themselves and others. Probably the single most common way this happens is through statistical malpractice. This post explains one such case in excellent detail, in language non-experts can comprehend, and identifies a general type of statistical screw-up: using the wrong tool.
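(Not the post's actual example, which I won't try to reproduce here, but a toy sketch of the "wrong tool" failure mode in Python with numpy/scipy, on made-up data: pick a tool whose assumptions your data violate and you can manufacture a "finding" out of nothing.)

```python
# Hypothetical illustration (not the case from the post): one way the
# "wrong tool" failure mode shows up. Pearson correlation assumes no wild
# outliers; a single bad data point can dominate it, while a rank-based
# measure (Spearman) barely notices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = rng.normal(size=50)          # genuinely unrelated to x

# One corrupted observation (say, a unit error) in both variables.
x = np.append(x, 20.0)
y = np.append(y, 20.0)

r, p = stats.pearsonr(x, y)          # driven almost entirely by the outlier
rho, p_rank = stats.spearmanr(x, y)  # rank-based, so the outlier is just "largest"

print(f"Pearson  r   = {r:.2f} (p = {p:.3g})")
print(f"Spearman rho = {rho:.2f} (p = {p_rank:.3g})")
```

Run it and the Pearson test should report a strong, highly "significant" correlation between two unrelated variables, while the rank-based version stays near zero.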
Who can I trust, and why?
Trust no one. Learn a little math.
You don’t need to be able to solve the problems on your own; you just need to understand the arguments. I’m not a math guy either, but only some of the statistical stuff is totally out of my grasp. Did you understand the math in this post?
The way traditional rationalists without special training relate to scientific findings is usually by accepting them as authoritative. One can become less wrong by learning that scientists are not close to perfect.
“Trust experts except when you don’t”?
Trust no one. Learn a little math.
“Don’t trust experts; become one yourself”? Wouldn’t that put me in the category of people-not-to-be-trusted? Isn’t that what Phil is pointing out, that most people don’t understand statistics? Why would I expect myself to be better at judging these kinds of problems than experts who spend their lives on it? Should I not expect myself to be just as bad at it, and potentially much worse (know enough to be dangerous)?
Did you understand the math in this post?
Yes. But it seems fundamental enough that experts should have caught it, so I am skeptical.
Scientists can be wrong. Certain kinds of science are more likely to involve screw-ups. Learn to identify these kinds of findings and learn to identify sources of screw-ups so you don’t fall for them.
“Don’t trust experts; become one yourself”?
If two experts disagree about something and you want to evaluate the disagreement, one way is to understand their arguments. Sometimes you can look into both sides and discover that one of them isn’t really the expert you thought they were. You can evaluate the arguments or evaluate the expertise. I can’t think of anything else.
Why would I expect myself to be better at judging these kinds of problems than experts who spend their lives on it? Should I not expect myself to be just as bad at it, and potentially much worse (know enough to be dangerous)?
I assume you’re not planning on trying to publish statistical analyses, so I doubt you’re dangerous.
You can probably learn more about statistics than at least some of the shoddy scientists out there. If you find yourself disagreeing about stats with a prominent statistician, then, yeah, you’re probably wrong.
You aren’t learning how to run different kinds of statistical analyses. You’re learning about statistical errors scientists make. It’s a different set of knowledge, which means you can know less about statistics in certain ways but still be able to point out where scientists go wrong.
Yes. But it seems fundamental enough that experts should have caught it, so I am skeptical.
Some questions to ask when you are in this situation (this is an obviously incomplete* list, of course):
Is the source pointing out the error reliable?
Does the criticized work acknowledge or otherwise address the claim?
Does the criticized work contain other flaws? (Subcategory: is the criticized work sloppy or lazy in execution?)
In this particular case, the answer to the third question appears to be “yes”. This is probably a good reason to raise your probability that this particular criticism is correct.
* Bear in mind, of course, Eliezer Yudkowsky’s warning: If you want to shoot your foot off, it is never the least bit difficult to do so.
Thank you. These steps for analysis are very useful to me, and I feel they answer my original questions.