This is correct as an analysis of racial politics, but you end up with latent variable measurement projects for multiple reasons. In the particular case of cognitive abilities, there’s also cognitive disability, military service, hiring, giftedness, cognitive decline, genomics and epidemiology, all of which have an interest in the measurement of cognitive abilities. Furthermore, the theory and tools for cognitive abilities can be informed by, and are informative for, the measurement of other latent variables, so you also end up with interest from people who study e.g. personality.
People should do serious research to inform their policy, and they should do serious research on latent variables, but they should avoid using disingenuous arguments where they talk like they’ve done serious research when really they’re trying to troll.
No, the reasoning generalises to those fields too. The problem driving those areas’ need for measurement of cognitive abilities is excessive bureaucratisation and the lack of a sensible top-down structure with responsibilities and duties in both directions. A wise and mature person can get a solid impression of an interviewee’s mental capacities from a short interview, and can even find out a lot of useful details that an IQ test is not going to cover, for example mental health, maturity, and capacity to handle responsibility.
Or consider it from another angle: suppose I know someone to be brilliant and extremely capable, but when taking an IQ test they only score 130 or so. What am I supposed to do with this information? Granted, such mismatches are pretty rare; normally the IQ score would simply reflect my estimation of their brilliance, in which case it adds no new information. But if the score does not match the person’s actual capabilities as I have been able to infer them, I am simply left with the conclusion that IQ is not a particularly useful metric for my purposes. It may be highly accurate, but experienced human judgement is considerably more accurate still.
Of course, individualised judgements of this sort are vulnerable to various failure modes, which is why large corporations and organisations like the military are interested in giving IQ tests instead. But this is often a result of regulatory barriers or other hindrances to simply requiring your job interviewers to avoid those failure modes and holding them accountable for it, with the risk of demotion or termination if their department becomes corrupt and/or grossly incompetent.
This issue is not particular to race politics. It is a much more general matter of fractal monarchy vs procedural bureaucracy.
Edit: or, if you want a more libertarian-friendly version, it is a general matter of subsidiarity vs totalitarianism.
No, the reasoning generalises to those fields too. The problem driving those areas’ need for measurement of cognitive abilities is excessive bureaucratisation and the lack of a sensible top-down structure with responsibilities and duties in both directions. A wise and mature person can get a solid impression of an interviewee’s mental capacities from a short interview, and can even find out a lot of useful details that an IQ test is not going to cover, for example mental health, maturity, and capacity to handle responsibility.
I’m not convinced about this, both from an efficiency perspective and an accuracy perspective.
Take military service as an example. The way I remember it, we had something like 60 people all take an IQ test in parallel, which seems more efficient than holding 60 separate interviews. (Somewhere between 2x and 60x more efficient, depending on how highly one weights the testers’ versus the testees’ time.) Or in the case of genomics, you are often dealing with big databases of people who signed up a long time ago for medical research; it’s not so practical to interview them extensively, and existing studies rely on brief tests that were given with minimal supervision.
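To spell out that rough range (assuming, purely for illustration, that one proctor can supervise the whole group and that a test session and an interview take about the same amount of a person’s time):

$$
\underbrace{1 + 60}_{\text{group test: proctor + testees}} \text{ vs } \underbrace{60 + 60}_{\text{interviews: interviewers + interviewees}} \approx 2\times,
\qquad
\underbrace{1}_{\text{proctor}} \text{ vs } \underbrace{60}_{\text{interviewers}} = 60\times \text{ (staff time only)}.
$$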
From an accuracy perspective, my understanding is that the usual finding is that psychometric tests and structured interviews provide relatively independent information, such that the best accuracy is obtained by combining both. This does definitely imply that there would be value in integrating more structured interviews into genomics (if anyone can afford it...), and more generally in integrating information from more angles into single datasets, but it doesn’t invalidate those tests in the first place.
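As a minimal sketch of why combining helps (the correlations below are made-up assumptions for illustration, not figures from any particular study), the standard two-predictor multiple-correlation formula gives a combined validity above either source alone:

```python
import numpy as np

# Hypothetical correlations, chosen only for illustration:
r_test = 0.50  # cognitive test       <-> criterion (e.g. job performance)
r_int  = 0.45  # structured interview <-> criterion
r_ti   = 0.30  # cognitive test       <-> structured interview

# Standard multiple-correlation formula for two predictors:
# R^2 = (r1^2 + r2^2 - 2*r1*r2*r12) / (1 - r12^2)
R = np.sqrt((r_test**2 + r_int**2 - 2 * r_test * r_int * r_ti) / (1 - r_ti**2))
print(f"test alone: {r_test:.2f}, interview alone: {r_int:.2f}, combined: {R:.2f}")
# The combined validity comes out around 0.59 here, above either predictor on
# its own, which is the sense in which the two sources carry relatively
# independent information.
```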
Or consider it from another angle: suppose I know someone to be brilliant and extremely capable, but when taking an IQ test they only score 130 or so. What am I supposed to do with this information? Granted, such mismatches are pretty rare; normally the IQ score would simply reflect my estimation of their brilliance, in which case it adds no new information. But if the score does not match the person’s actual capabilities as I have been able to infer them, I am simply left with the conclusion that IQ is not a particularly useful metric for my purposes. It may be highly accurate, but experienced human judgement is considerably more accurate still.
I mean, there are a few different obvious angles to this.
IQ tests measure g. If you’ve realized someone has some non-g factor that is very important for your purposes, then by all means conclude that the IQ test missed that.
If you’ve concluded that the IQ test underestimated their g, that’s a different issue. You’re phrasing things as if your own assessment correlates at something like 0.9 with g and as if the residual is non-normally distributed, which I sort of doubt is true (it should be easy enough to test… though maybe you already have experiences you can reference to illustrate it?)
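On “easy enough to test”: a minimal sketch of what such a check could look like, assuming one could collect informal ratings, IQ scores, and some later outcome for the same people (all names and numbers below are hypothetical placeholders, not anyone’s actual data):

```python
import numpy as np

def compare_predictors(rating, iq, criterion):
    """Compare how well an informal rating vs an IQ score predicts a later outcome."""
    r_rating = np.corrcoef(rating, criterion)[0, 1]
    r_iq = np.corrcoef(iq, criterion)[0, 1]
    # If informal judgement really is "considerably more accurate still",
    # r_rating should reliably exceed r_iq in data like this.
    return r_rating, r_iq

# Fabricated data purely so the sketch runs end to end:
rng = np.random.default_rng(0)
ability = rng.normal(size=100)                         # stand-in for the latent trait
rating = ability + rng.normal(scale=0.5, size=100)     # informal assessment, with noise
iq = ability + rng.normal(scale=0.5, size=100)         # test score, with noise
criterion = ability + rng.normal(scale=0.7, size=100)  # later outcome of interest
print(compare_predictors(rating, iq, criterion))
```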