Yes, for sure. You can still fall for selective skepticism, where you scrutinize studies you don’t “like” far more than studies you do. You can counter that by systematically applying the same checklist to every study you read, but that is time-consuming. The real solution is probably a community that is versed in statistics and holds open debates on the quality of studies; if the community has enough diversity of thought, individual biases may cumulatively cancel out. Hence the value of pluralism.
First off, I like the compilation you made and I’m tempted to memorize it despite all I’m saying.
This ‘pluralism’ solution does not feel meaty—your last sentence, “Hence the value of pluralism,” sounds to me like an applause light. I mean yeah, ultimately you and I build a lot of what we know on trust in the whole collective of scientists. But saying so isn’t directly useful: there should be a halfway-decent solution for you as a solo rationalist, with calibrating against others’ beliefs as an extra measure you apply later. After all, I’d still prefer all those others to have used good solo toolkits themselves: that makes them more reliable sources for me too.
Tentatively, for a real solution, I propose that it’s better to focus on learning what correct statistics looks like, so that incorrect statistics automatically triggers a feeling of puzzlement; that way you also retain the ability to compare the quality of two studies.
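As one concrete instance of that kind of internalized check, here is a minimal sketch of the GRIM test (granularity-related inconsistency of means): once you have absorbed the fact that a mean of n integer-valued scores must be some integer divided by n, a reported mean that can’t arise that way jumps out at you. The function name and interface are my own for illustration.

```python
import math

def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM check: can `reported_mean`, rounded to `decimals`, arise as the
    mean of n integer-valued scores (e.g. Likert items)?

    Any such mean equals total/n for some integer total, so only the integer
    totals nearest to reported_mean * n can possibly reproduce it.
    """
    target = round(reported_mean, decimals)
    for total in (math.floor(target * n), math.ceil(target * n)):
        if round(total / n, decimals) == target:
            return True
    return False

# With n = 10, every possible mean is a multiple of 0.1,
# so a reported mean of 3.48 is arithmetically impossible.
print(grim_consistent(3.50, 10))  # True
print(grim_consistent(3.48, 10))  # False
print(grim_consistent(3.47, 15))  # True  (52/15 ≈ 3.467 rounds to 3.47)
```

The point is not this particular script but the habit it embodies: knowing what legitimate numbers must look like makes illegitimate ones feel puzzling on sight.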
Or you could learn each type of misuse as part of thoroughly learning the concept it applies to, with the focus on better understanding that concept rather than on cataloguing the misuse.