It’s an interesting and very common form of an entirely irrational argument. The (hypothetical) absence of a better test in no way implies that the test in question is good enough for a given purpose, especially when no one has really tried to quantify the error you might get.
I prefer quantitative arguments to qualitative arguments; relatedly, I prefer certainty as a number to certainty as a word. I think it’s better to make the most of mediocre data (and figure out which additional data is highest EV) than to throw out the best data available.
It is not true that people haven’t tried to quantify the error they might get; this is actually a major concern of psychometricians. They’ve identified several ways that a test can go wrong, and have come up with quantitative measures of how much each of those seems to have happened. For example, a problem with WWI-era IQ tests was that the modal number of correct answers was 0, which suggests that a large number of test-takers did not understand the instructions, and which dropped the uncorrected mean significantly. Now they look for this problem.
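To make the arithmetic concrete, here’s a minimal sketch (with invented numbers, not the actual WWI data) of why a spike of zero scores is both easy to detect and badly distorting: the modal score goes to 0, and the uncorrected mean lands far below the mean of the people who actually attempted the test.

```python
# Illustrative only: hypothetical score lists, not WWI data.
from statistics import mean, mode

genuine_scores = [22, 25, 27, 30, 31, 33, 35, 38, 40, 42]  # test-takers who understood the instructions
confused_scores = [0] * 6                                   # test-takers who did not
all_scores = genuine_scores + confused_scores

print("modal score:", mode(all_scores))                        # 0 -- the red flag psychometricians check for
print("uncorrected mean:", round(mean(all_scores), 1))         # ~20.2, dragged down by the zeros
print("mean excluding zeros:", round(mean(genuine_scores), 1)) # ~32.3
```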
For example, here’s a paper about Raven’s in Africa, which goes through the various ways that Raven’s could underestimate African intelligence. It’s full of quantitative statements like “the correlation with other intellectual tests is generally about .6 in Western studies, but is .33 in African studies, suggesting it is less g-loaded for Africans.”*
If you wanted, you could figure out what an individual Raven’s score of 80 implies for any other cognitive test in Westerners and Africans respectively. Like any Bayesian exercise, this relies pretty heavily on the priors you choose: if you assume the score is accurate but not precise, then you have a mean centered on 80 but a different variance for the two groups, with a larger African variance because your test is less precise. If you assume both groups have the Western mean, then the regression to the mean (i.e. upwards) is larger for the African than the Westerner, again because the test was less precise.
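As a sketch of that second prior (both groups share the Western mean of 100, but the test is noisier for one of them), here’s the classical true-score shrinkage calculation. The reliabilities are illustrative stand-ins loosely pegged to the .6 and .33 correlations quoted above, not real parameter estimates.

```python
def shrunk_estimate(observed, prior_mean, reliability):
    """Classical true-score estimate: shrink the observed score toward the
    prior mean in proportion to the test's unreliability."""
    return reliability * observed + (1 - reliability) * prior_mean

observed = 80
prior_mean = 100  # assumption for this sketch: both groups share the Western mean

# Reliabilities below are illustrative stand-ins, not measured values.
for label, reliability in [("Western", 0.60), ("African", 0.33)]:
    est = shrunk_estimate(observed, prior_mean, reliability)
    print(f"{label}: observed {observed} -> estimated true score {est:.1f}")

# Roughly: Western -> 88.0, African -> 93.4.
# The less precise test regresses further upward toward the assumed mean.
```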
*I should point out that there are other, competing interpretations of this finding, and it seems that the correlation is lower for the more rural and less educated, suggesting the left half of Fig 4 is due to culture. But from the studies on the right half of Fig 4, we would end up with an estimate for African intelligence given Western culture that’s about 80-85, which is a bit lower than African American intelligence.
I said, “doesn’t do much arithmetic”. You can look at whites 1,000 or 2,000 years ago and the vast majority didn’t do much arithmetic. “Haven’t invented arithmetic” is your invention.
I was thinking of anumeric tribes, which are rare enough that we’re not quite sure whether or not they exist. But many tribes seem at least partially anumeric, and I would be surprised if that were not predictive of the mean IQ of people currently in the tribe (setting aside the question of ‘genetic IQ capability’).
That most Romans did not do much arithmetic over the course of their lives doesn’t say all that much about their ability to do arithmetic or their general intellectual capability; most modern Americans don’t do much arithmetic (and, actually, they probably do less because they have more machines to do it for them).