Most employers want a track record of doing job X successfully when hiring people to do job X. If job X requires intelligence, then they will be indirectly selecting intelligent people … whilst filtering out “smart but doesn’t get things done” people. Seems sane to me.
Yes, of course. These particular traits you have deigned to consider for your worthy evaluation do seem, to me as well, perfectly sane.
I think you forgot to activate your Real World Logic coprocessor before replying. (Yes, I’m being sarcastic and offensive in this response.)
In more serious terms: these particular selected characteristics do not comprise the entirety of the aforementioned “system”. I’ve said that the system is /unlikely/ to be sane, as I do not have complete information about its entire logic and processes. I also think we’re working from different definitions of “sane”: here, IIRC, I was using a technical sense that could be better expressed as “close to perfectly rational, in the same way perfect logicians can be in theoretical formal logic puzzles”.
“Insane” is not an obvious synonym for “imperfect”.
Opinions vary on the role of intelligence in the first place.
That leads to a much-noted chicken-and-egg problem… but that aside, for all but the most menial and interchangeable X, employers don’t generally have access to data about how well and how long prospective hires have done X. They have access to candidates’ word for how well they’ve done more or less imperfectly related work, and usually to recommendations from their former employers and coworkers—but the former is unreliable, and the latter demonstrates only that the candidate isn’t a complete schlub.
I haven’t read the paper in the ancestor, but it seems reasonable to me that IQ would often end up being a better predictor of performance, given these constraints.
One thing being imperfect doesn’t make another thing better.
No. But it is evidence that the other thing is better, when the constraints in question don’t apply to that other thing.
Of course, while we’re talking about evidence, we shouldn’t neglect the fact that the traditional interview/resume method has reached fixation and doesn’t look to be in immediate danger of being displaced. But “current practice” doesn’t necessarily imply “optimal” or even “best known”, especially when psychometric methods are legally problematic.