I don’t even know how the test is constructed, so it would be downright silly of me to try to come up with predictions in terms of numbers.
No, the fact that you have some uncertainty about the test just indicates that you should choose a larger confidence interval than if you knew the details of the test. It shouldn’t stop you from being able to produce a confidence interval.
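To make that concrete, here is a minimal sketch (in Python, with every number invented for illustration) of what folding uncertainty about the test into a wider interval could look like: when the test’s bias and noise level are themselves uncertain, the predictive interval for the observed gap gets wider, but you still get an interval.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative assumption: the "true" gap between the groups is 10 percentage points.
true_gap = 0.10

# Case 1: the test's details are known, so only measurement noise remains (sd assumed 0.05).
known = rng.normal(true_gap, 0.05, n)

# Case 2: the test's details are unknown, so we are also uncertain about the test's
# systematic bias and its noise level; draw those first, then the observed gap.
bias = rng.normal(0.0, 0.05, n)
noise_sd = rng.uniform(0.03, 0.10, n)
unknown = rng.normal(true_gap + bias, noise_sd)

for label, draws in (("test known", known), ("test unknown", unknown)):
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{label}: 95% interval for the observed gap = [{lo:.3f}, {hi:.3f}]")
```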
I wasn’t trying to argue against you or anything; I just wanted to give you a tip.
I don’t have any issue with people arguing with me. What I do take issue with is people assuming that I’m ignorant of the subject I’m talking about. Not knowing about Sokal would be a case of ignorance. But even that is not a major issue.
Sarcasm does not further a constructive debate. Also, I think your way of arguing is generally too nit-picky and uncharitable.
Tribalism is a huge failure mode. I don’t think it’s helpful to pretend that it isn’t. Practicing charity in the sense of assuming that the people with whom one argues are immune to effects like tribalism is not conducive to truth-finding.
You yourself wrote a post about identifying patterns of bad reasoning. You won’t get very far with that project if you adopt discussion norms that forbid people from pointing out those patterns.
The irony of you criticising Freud for not making falsifiable predictions, while being unwilling to make concrete, numeric, falsifiable predictions about the supposed irrationality of postmodernists, is too central to ignore out of a desire for politeness.
Part of science is that you are not so charitable about predictions that you interpret them as confirmed regardless of what data you find.
That’s especially important when you say negative things about an outgroup that you don’t like. It’s a topic where you have to be extra careful to follow principles of proper reasoning.
This might seem nit-picky to you, but it’s very far from it. You don’t make a discourse more rational by analysing it in a detached way if you don’t actually apply your tools for bias detection.
The whole issue with the Sokal episode was that the journal’s editors were very charitable to Sokal and therefore published his paper.
I don’t even know how the test is constructed, so it would be downright silly of me to try to come up with predictions
No, the fact that you have some uncertainty about the test just indicates that you should choose a larger confidence interval than if you knew the details of the test. It shouldn’t stop you from being able to produce a confidence interval.
The fact that you have some uncertainty about the test also has implications for the distribution of possible results. If a group is 10% less rational than another and that 10% is due to a characteristic that makes those group members systematically worse than the comparison group, you can measure a lot of group members and confirm that you get measurements that average 10% less.
If a group is 20% less rational than another group but there’s a 50% chance the test detects the difference and a 50% chance it doesn’t, that can also be described as you expecting results showing the group is 10% less rational. But unlike in the first case, you can’t take a lot of measurements and get a result that averages out to 10% less. You’ll either get a lot of results that average 20% less or a lot of results that aren’t less at all, depending on whether the test detects or doesn’t detect it.
And in the second case, the answer to “can I use the test to make predictions” is “no”. If you’re uncertain about the test, you can’t use it to make predictions, because you will be predicting the average of many samples (in order to reduce variation), and if you are uncertain about the test, averaging many samples doesn’t reduce variation.
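A quick simulation may make this concrete. It is only a sketch of the scenario described above (a 20% gap that the test either detects or does not, decided once per “world” rather than once per subject); the sample size and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects = 10_000   # many measurements within each scenario
n_worlds = 8          # independent scenarios for the one-off fact about the test

for world in range(n_worlds):
    # The coin is flipped once per world: either the test detects the gap or it doesn't.
    test_detects = rng.random() < 0.5
    effect = 0.20 if test_detects else 0.0
    # Per-subject measurement noise (sd of 0.05 is an arbitrary illustrative choice).
    measured_gaps = rng.normal(effect, 0.05, n_subjects)
    print(f"world {world}: test detects = {test_detects}, "
          f"average measured gap = {measured_gaps.mean():.3f}")
```

The per-world averages cluster near 0.20 or 0.00 and never near 0.10, no matter how many subjects are measured, which is the sense in which averaging more samples does not reduce this kind of variation.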
but there’s a 50% chance the test detects the difference and a 50% chance it doesn’t
Rationality is not a binary variable but a continuous one. It is NOT the case that the test has a chance of detecting either something or nothing: the test will output a value on some scale. If the test is not powerful enough to detect the difference, that will show up as the difference not being statistically significant; the difference will be swamped by noise, not fully appear or fully disappear in any given instance.
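For contrast, here is what an underpowered test of a continuous difference might look like, with all numbers invented for illustration: the estimated gap scatters around the true value and a t-test usually fails to reach significance, rather than the gap fully appearing or fully vanishing.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_gap, noise_sd, n = 0.10, 1.0, 50   # small true difference, lots of noise, small samples

for trial in range(5):
    group_a = rng.normal(0.0, noise_sd, n)
    group_b = rng.normal(-true_gap, noise_sd, n)   # slightly lower on a continuous scale
    estimated_gap = group_a.mean() - group_b.mean()
    _, p_value = stats.ttest_ind(group_a, group_b)
    print(f"trial {trial}: estimated gap = {estimated_gap:+.3f}, p = {p_value:.2f}")
```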
You’ll either get a lot of results that average 20% less or a lot of results that aren’t less at all
Nope—that would only be true if rationality were a boolean variable. It is not.
That doesn’t follow. For instance, imagine that one group is irrational because their brains freeze up at any problem that contains the number 8, and some tests contain the number 8 and some don’t. They’ll fail the former tests, but be indistinguishable from the other group on the latter tests.
I can imagine a lot of things that have no relationship to reality.
In any case, you were talking about a test that has a 50% chance of detecting the difference, presumably returning either 0% or 20% but never 10%. Your example does not address this case—it’s about different tests producing different results.
You were responding to Stefan. As such, it doesn’t matter whether you can imagine a test that works that way; it matters whether his uncertainty over whether the test works includes the possibility of it working that way.
Your example does not address this case—it’s about different tests producing different results.
If you don’t actually know that they freeze up at the sight of the number 8, and you are 50% likely to produce a test that contains the number 8, then the test has a 50% chance of working, by your own reasoning—actually, it has a 0% or 100% chance of working, but since you are uncertain about whether it works, you can fold the uncertainty into your estimate of how good the test is and claim 50%.
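Put as a toy calculation (the 50% figure comes from the comment above; everything else is assumed): the binary fact about the test gets folded into a single credence that it works.

```python
# The test either contains an 8 or it doesn't; whether it "works" is fixed by that.
p_contains_8 = 0.5      # credence that the test you produce contains an 8
p_works_if_8 = 1.0      # a test with an 8 catches the freeze-up
p_works_if_no_8 = 0.0   # a test without an 8 does not

# Fold the uncertainty about the test into one number:
p_works = p_contains_8 * p_works_if_8 + (1 - p_contains_8) * p_works_if_no_8
print(p_works)          # 0.5

# The expected observed gap is then 0.5 * 20% = 10%, even though the long-run
# average in any particular case will sit near 20% or near 0%, never near 10%.
print(p_works * 0.20)   # 0.1
```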