No, the reasoning generalises to those fields too. What drives those areas’ need to measure cognitive abilities is excessive bureaucratisation and the lack of a sensible top-down structure with responsibilities and duties running in both directions. A wise and mature person can get a solid impression of an interviewee’s mental capacities from a short interview, and can even pick up a lot of useful details that an IQ test is never going to cover, such as mental health, maturity, and capacity to handle responsibility.
I’m not convinced about this, both from an efficiency perspective and an accuracy perspective.
Take military service as an example. The way I remember it, we had like 60 people all take an IQ test in parallel, which seems more efficient than running 60 separate interviews. (Somewhere between 2x and 60x more efficient, depending on how highly one weights the testers’ versus the testees’ time.) Or in the case of genomics, you are often dealing with big databases of people who signed up a long time ago for medical research; it’s not practical to interview them extensively, and existing studies rely on brief tests that were given with minimal supervision.
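To make that bound explicit, here’s a back-of-the-envelope sketch; the one-hour durations and the single proctor are my own assumptions, not figures from the actual setup:

```python
# Rough person-hours comparison: one proctored group IQ test vs. 60 one-on-one
# interviews. Durations and staffing are illustrative assumptions only.

n = 60                 # candidates
test_hours = 1.0       # assumed length of the group test
interview_hours = 1.0  # assumed length of each interview

# Counting only staff time: 1 proctor-hour vs. 60 interviewer-hours -> ~60x.
staff_group = test_hours * 1
staff_interviews = interview_hours * n
print(staff_interviews / staff_group)        # 60.0

# Counting everyone's time equally: 61 person-hours vs. 120 -> ~2x.
total_group = test_hours * (n + 1)
total_interviews = interview_hours * n * 2
print(total_interviews / total_group)        # ~1.97
```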
From an accuracy perspective, my understanding is that the usual finding is that psychometric tests and structured interviews provide relatively independent information, such that the best accuracy is obtained by combining both. This does imply there would be value in integrating more structured interviews into genomics (if anyone can afford it...), and more generally in integrating information from more angles into single datasets, but it doesn’t invalidate the tests in the first place.
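A minimal simulation of the “relatively independent information” point, assuming (purely for illustration) that the test and the interview each track the criterion moderately but have mostly independent errors; the specific coefficients are made up, not empirical estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent criterion (e.g. later performance) plus two noisy predictors whose
# errors are independent of each other. All numbers are illustrative.
criterion = rng.normal(size=n)
test = 0.5 * criterion + rng.normal(scale=0.9, size=n)       # psychometric test
interview = 0.4 * criterion + rng.normal(scale=0.9, size=n)  # structured interview

def r(pred, y):
    return np.corrcoef(pred, y)[0, 1]

# Least-squares combination of both predictors.
X = np.column_stack([test, interview, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, criterion, rcond=None)
combined = X @ beta

print(r(test, criterion), r(interview, criterion), r(combined, criterion))
# The combined predictor beats either one alone, because their errors
# are largely independent.
```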
Or consider it from another angle: suppose I know someone to be brilliant and extremely capable, but when taking an IQ test they only score 130 or so. What am I supposed to do with this information? Granted, it’s pretty rare; normally the IQ score would reflect my estimation of their brilliance, in which case it adds no new information. But if the score does not match the person’s actual capabilities as I have been able to infer them, I am simply left with the conclusion that IQ is not a particularly useful metric for my purposes. It may be highly accurate, but experienced human judgement is considerably more accurate still.
I mean, there are a few different obvious angles to this.
IQ tests measure g. If you’ve realized someone has some non-g factor that is very important for your purposes, then by all means conclude that the IQ test missed that.
If you’ve concluded that the IQ test underestimated their g, that’s a different issue. You’re phrasing things as if your own assessment correlates with g at something like 0.9 and the residual is non-normally distributed, which I sort of doubt is true (it should be easy enough to test… though maybe you already have experiences you can reference to illustrate it?)
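For what “easy enough to test” might look like in practice, here is a sketch assuming you had your own numeric ratings of people alongside their measured IQs (the data below is hypothetical, and measured IQ is used as a stand-in for g):

```python
import numpy as np
from scipy import stats

# Hypothetical paired data: subjective ratings rescaled to an IQ-like metric,
# and the same people's measured IQ scores.
my_ratings = np.array([128, 115, 142, 101, 133, 119, 138, 107, 125, 130])
measured_iq = np.array([124, 118, 135, 104, 130, 117, 131, 110, 122, 129])

# How strongly do the two track each other?
r, p = stats.pearsonr(my_ratings, measured_iq)
print(f"correlation: {r:.2f} (p={p:.3f})")

# Regress measured IQ on the ratings and inspect the residuals.
slope, intercept, *_ = stats.linregress(my_ratings, measured_iq)
residuals = measured_iq - (intercept + slope * my_ratings)

# Shapiro-Wilk test for (non-)normality of the residuals; with only a handful
# of cases this has little power, which is part of the difficulty.
w, p_norm = stats.shapiro(residuals)
print(f"Shapiro-Wilk: W={w:.2f}, p={p_norm:.3f}")
```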