Google did some experiments on measurable ways to do interviews (puzzles, etc.) and found no effect on hire quality.
Unsurprising due to range restriction—by the time you’re interviewing with Google, you’ve gone through tons of filters (especially if you’re a Stanford grad). This is the same reason that when people look at samples of elite scientists, IQ tends to not be as important a factor as one would expect—because they’re all smart—and other things like personality factors start to correlate more.
EDIT: this may be related to Spearman’s law of diminishing returns
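To make the range-restriction point concrete, here is a minimal sketch (not from the original thread; the population correlation of 0.5 and the top-5% cutoff are made-up illustrative assumptions): if test score and later performance are moderately correlated in the full population, conditioning on having already passed a high score cutoff shrinks the observed correlation a lot.

```python
# Illustrative sketch of range restriction (assumed numbers, not real data).
# We simulate a population where test score and performance correlate at 0.5,
# then keep only the top 5% of scorers ("everyone who made it to the interview")
# and watch the correlation within that group shrink.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_r = 0.5  # assumed population correlation between score and performance

# Draw correlated (score, performance) pairs from a bivariate normal.
cov = [[1.0, true_r], [true_r, 1.0]]
score, performance = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Correlation in the full, unfiltered population.
r_full = np.corrcoef(score, performance)[0, 1]

# Restrict to candidates above the 95th percentile on the score.
cutoff = np.quantile(score, 0.95)
mask = score > cutoff
r_restricted = np.corrcoef(score[mask], performance[mask])[0, 1]

print(f"correlation in full population:   {r_full:.2f}")        # close to 0.50
print(f"correlation among top 5% scorers: {r_restricted:.2f}")  # noticeably lower
```

Nothing about the test changed between the two numbers; only the sample did, which is why a measure can look useless among people who have already been heavily filtered on it.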
I am just saying that for people who are capable of doing more than flipping burgers (which probably starts well before a single sigma out from the mean), we should just look at what they have done.
This approach has the advantage of not rating highly the kind of people who may place well on tests and the like thanks to good hardware, but who, due to poor habits or whatever other reason, end up not living up to their potential.
Similarly, this approach highlights that creative output is often not comparable. Is Van Gogh “better” than Shakespeare? A silly question.
I don’t disagree that IQ tests are useful for some things for folks within a sigma of the mean, and I also agree with the consensus that tests start to fail for very smart folks, so at that point we need better models.
If the average IQ of LW is really around 140, then I think we should talk about the neat things we have done, and not the average IQ of LW. :)
Tests are often used to decide what to allow people to do, so you can’t rely only on what people have done already. College applicants, for example, usually don’t have a significant history of doing things yet.