Since IQ means intelligence *quotient*, you always compare a score to other scores, so it’s not an absolute measure by definition; there is no absolute IQ test. Nor am I aware of any respectable existing test for absolute intelligence.
Right. Unfortunately, whenever someone wants to talk about absolute intelligence, “IQ” is the closest word/concept to that.
When you look at adult IQ tests, the raw score is a decent measure of ‘absolute intelligence’ for most modern humans. Current tests have known problems with exceptional individuals (on either end), and some tests are more interested in determining the shape of someone’s intelligence (like, say, the subtests on the Woodcock–Johnson) than others (like Raven’s matrices, which only test one thing). Comparing raw scores tells you useful things: about the effects of age, about the Flynn effect, about theoretical populations, and even about the distribution now. IQ scores are defined to follow a bell curve, but if the raw scores don’t follow a bell curve, that’s important to know!
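To make the “defined to follow a bell curve” point concrete, here’s a minimal sketch of percentile norming in Python. It is not any published test’s actual procedure (`norm_to_iq` is a hypothetical helper, and scipy is an assumed dependency); it just shows that the norming step manufactures normality in the outputs no matter what the raw scores look like:

```python
# A minimal sketch of norming: rank each raw score against the sample,
# then map its percentile through the inverse normal CDF so the outputs
# follow N(100, 15) by construction.
import numpy as np
from scipy import stats

def norm_to_iq(raw_scores):
    raw = np.asarray(raw_scores, dtype=float)
    n = len(raw)
    # Percentile rank of each score within the sample (midpoint convention).
    percentiles = (stats.rankdata(raw) - 0.5) / n
    # Rescale normal quantiles to the conventional IQ mean of 100, SD of 15.
    return 100 + 15 * stats.norm.ppf(percentiles)

# Deliberately skewed raw scores: the IQ outputs still come out normal,
# which is exactly why the raw-score distribution is the thing worth checking.
raw = np.random.lognormal(mean=3.0, sigma=0.4, size=10_000)
iq = norm_to_iq(raw)
print(iq.mean(), iq.std())  # ~100, ~15, regardless of the shape of `raw`
```

The point of the sketch is that any non-normality in the raw scores is erased by the norming, so only the raw scores can tell you whether the underlying distribution was bell-shaped to begin with.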
The concept of IQ as a quotient seems rooted in the history of testing children (“this 12-year-old has a 16-year-old’s development”), which isn’t very useful for adults. If we give a test for adults to Alice and Betty, and Alice scores an IQ of 140 and Betty an IQ of 100, that doesn’t mean Alice is 40% smarter than Betty; it means Betty is at the 50th percentile and Alice at the 99.6th. But, in practice, we might want to know that it takes Betty 90 seconds to get a problem right 80% of the time, while it takes Alice 5 seconds to get it right 100% of the time; that’s the data we collected in order to produce the official outputs of 140 and 100.
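The percentile arithmetic is just the normal CDF, assuming the usual convention of mean 100 and standard deviation 15; a quick check:

```python
# IQ -> percentile under the standard N(100, 15) convention.
from scipy import stats

def iq_to_percentile(iq, mean=100.0, sd=15.0):
    return 100 * stats.norm.cdf((iq - mean) / sd)

print(iq_to_percentile(100))  # 50.0   -> Betty, 50th percentile
print(iq_to_percentile(140))  # ~99.62 -> Alice, ~99.6th percentile
```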
If we picture the concept of absolute intelligence as some kind of optimal information process with certain well-defined characteristics, whose lower and upper bounds are determined only by the laws of physics, I’m afraid human intelligence will hardly be comparable to it in any really meaningful way.
The Sentience Quotient is the closest thing I can think of, and it’s mostly good for describing why humans and trees have few productive conversations (though the upper bound is also interesting).
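For reference, the Sentience Quotient is just a log ratio: SQ = log10(I/M), with I the information-processing rate in bits per second and M the processor mass in kilograms. A minimal sketch, where the input figures are rough ballpark values in the spirit of Freitas’s original estimates rather than measurements:

```python
# Sentience Quotient: SQ = log10(I / M), I in bits/s, M in kg.
# All example figures below are rough, assumed ballpark values.
import math

def sq(bits_per_second, mass_kg):
    return math.log10(bits_per_second / mass_kg)

print(sq(1e14, 1.4))  # human brain: ~+13.9 here; Freitas's figure is ~+13
print(sq(1e-2, 1.0))  # a plant: ~-2, hence the unproductive conversations
# Freitas puts the quantum-mechanical ceiling around SQ +50, which is why
# the upper bound is the interesting part.
```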