So it’s possible that a given raw score will mean a different thing one year than another. For the SAT and GRE, getting one question wrong on the math section will drop you by tens of points, but how many varies from year to year. (Other scores are more stable; that one is corrupted by edge effects from the tremendous number of people who get all the quantitative questions correct.)
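To make that edge effect concrete, here is a toy sketch, not the actual SAT/GRE scaling procedure, just a generic percentile-preserving model with made-up numbers: when a large share of test-takers earns a perfect raw score, the perfect score and the one-miss score sit many percentile points apart, so any scaling that preserves percentile rank must separate them by many scaled points.

```python
# Toy illustration of the ceiling effect, NOT the actual SAT/GRE scaling procedure.
# Assume scaled scores are assigned to preserve percentile rank under a normal
# reference distribution (a generic equating-style assumption).
from statistics import NormalDist

reference = NormalDist(mu=500, sigma=100)   # hypothetical scaled-score scale

def scaled_score(percentile):
    """Map a percentile rank (0-1) onto the hypothetical scaled scale."""
    return reference.inv_cdf(percentile)

# Hypothetical raw-score distribution with heavy mass at the ceiling:
# say 8% of test-takers answer every quantitative question correctly,
# and another 7% miss exactly one.
share_perfect = 0.08
share_one_miss = 0.07

# Take percentile rank at the midpoint of each score's own probability mass.
pct_perfect  = 1 - share_perfect / 2                   # 0.96
pct_one_miss = 1 - share_perfect - share_one_miss / 2  # 0.885

print(round(scaled_score(pct_perfect)))    # ~675
print(round(scaled_score(pct_one_miss)))   # ~620
# One wrong answer costs ~55 scaled points here, purely because so many
# people pile up at the maximum raw score.
```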
The point I was making is that, when IQ is calculated by age group, that’s evidence that there are raw score differentials between age groups. This paper shows a theoretical graph of what that would look like in Figure 1. Also related is Figure 3, but it has a crazy axis and so I’m hesitant to apply it. (I’m having trouble finding actual raw score data out there.)
If age-related decline and death are unrelated to intelligence, then even though raw scores will decline with age, individual IQ will stay the same in expectation (beyond unavoidable random drift) because each person is compared to people whose scores have declined about as much as theirs.
When IQ is used as a measure of “where are you relative to your peers?”, you want this. When IQ is used as a measure of absolute intelligence, you don’t want this. This email by Eliezer comes to mind.
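To make the age-norming mechanism above concrete, here is a minimal sketch assuming the standard deviation-IQ formula (100 + 15 × the z-score against one’s own age group) and entirely made-up norm tables: a raw score that falls in step with the age-group mean leaves the IQ unchanged.

```python
# Minimal sketch of age-norming, with made-up norm tables.
# Deviation IQ: 100 + 15 * (raw - age_group_mean) / age_group_sd.
AGE_NORMS = {            # hypothetical (mean, sd) of raw scores by age band
    "20-29": (52.0, 8.0),
    "60-69": (44.0, 8.0),
}

def iq_from_raw(raw_score, age_band):
    mean, sd = AGE_NORMS[age_band]
    return 100 + 15 * (raw_score - mean) / sd

# Same person, decades apart: the raw score drops by 8 points,
# but so does the age-group mean, so the IQ comes out identical.
print(iq_from_raw(60, "20-29"))   # 115.0
print(iq_from_raw(52, "60-69"))   # 115.0
```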
“The point I was making is that, when IQ is calculated by age group, that’s evidence that there are raw score differentials between age groups.”
Exactly, that is the point. Of course there is a certain age-related deterioration of intelligence, especially fluid intelligence. So even if he took the exact same test he took decades ago, his raw score would surely be lower now than it was back then. Confusingly enough, he could still be said to be as “intelligent” as he was back then if his relative position within the IQ distribution hadn’t changed. (Which, if we are to believe his recent IQ test, is actually what happened.)
If any of this is confusing, it’s because IQ is a relative measurement. So if I were to say that he is as intelligent as he was decades ago in the context of an IQ test, that doesn’t mean that he would solve the same proportion of tasks correctly, or that there wasn’t any cognitive decline due to aging, but only that his relative position within the normal distribution of IQ scores hasn’t changed.
IQ tests never measure absolute intelligence. Since IQ means intelligence -quotient-, you always compare a score to other scores, so it’s not an absolute measure by definition; there is no absolute IQ test. I’m also not aware of any respectable existing test of absolute intelligence, nor do I know what one would even look like, although I’m sure you could in principle construct one if you defined the word intelligence in unconfused terms that reflect actual reality, which seems like a monumental task.
If we picture the concept of absolute intelligence as some kind of optimal information process with certain well-defined characteristics, whose lower and upper bounds are determined only by the laws of physics, I’m afraid human intelligence will hardly be comparable to it in any really meaningful way. And more importantly, how could you even begin to make a reliable and valid measure of something like that in humans?
“Since IQ means intelligence -quotient-, you always compare a score to other scores, so it’s not an absolute measure by definition; there is no absolute IQ test. I’m also not aware of any respectable existing test of absolute intelligence,”
Right. Unfortunately, whenever someone wants to talk about absolute intelligence, “IQ” is the closest word/concept to that.
When you look at adult IQ tests, the raw score is a decent measure of ‘absolute intelligence’ for most modern humans. Current tests have known problems with exceptional individuals (on either end), and some tests are more interested in determining the shape of someone’s intelligence (like, say, the subtests on the Woodcock-Johnson) than others (like the Raven’s test, which only tests one thing). Comparing raw scores tells you useful things: about the effects of age, about the Flynn effect, about theoretical populations, and even about the distribution now. IQ scores are defined to follow a bell curve, but if the raw scores don’t follow a bell curve, that’s important to know!
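As a sketch of what “defined to follow a bell curve” means in practice (the details vary by test; this is just the generic rank-based normalization idea, with made-up data): each raw score is converted to its percentile rank in a norming sample and then mapped through the inverse normal CDF onto a mean-100, SD-15 scale, so any raw distribution, bell-shaped or not, comes out looking normal.

```python
# Generic rank-based normalization sketch (not any specific test's procedure):
# force an arbitrary raw-score distribution onto the IQ scale of mean 100, SD 15.
from statistics import NormalDist

iq_scale = NormalDist(mu=100, sigma=15)

def raw_to_iq(raw_score, norming_sample):
    """Percentile rank within the norming sample, mapped through the inverse
    normal CDF. Midpoint ranks keep ties from landing at exactly 0 or 1."""
    below = sum(1 for s in norming_sample if s < raw_score)
    equal = sum(1 for s in norming_sample if s == raw_score)
    percentile = (below + equal / 2) / len(norming_sample)
    return iq_scale.inv_cdf(percentile)

# A decidedly non-bell-shaped raw distribution (made up): most people cluster
# near the ceiling of an easy test.
sample = [58, 59, 59, 60, 60, 60, 60, 60, 55, 40]
print(round(raw_to_iq(60, sample)))   # ~110: half the sample maxes out
print(round(raw_to_iq(40, sample)))   # ~75: the one low scorer
```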
The concept of IQ as a quotient seems rooted in the history of testing children (“this 12-year-old has a 16-year-old’s development”), which isn’t very useful for adults. If we give a test for adults to Alice and Betty, and Alice has an IQ of 140 and Betty has an IQ of 100, that doesn’t mean Alice is 40% smarter than Betty; it means that Betty is at the 50th percentile and Alice is at the 99.6th percentile. But, in practice, we might want to know that it takes Betty 90 seconds to get a problem right 80% of the time, and it takes Alice 5 seconds to get it right 100% of the time, which is the data we collected in order to produce the official outputs of 140 and 100.
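To check those percentile figures (assuming the usual mean-100, SD-15 scale): an IQ of 140 is (140 − 100) / 15 ≈ 2.67 standard deviations above the mean, which the normal CDF puts at roughly the 99.6th percentile, while 100 sits at the 50th by definition.

```python
# Percentile rank implied by an IQ score on the standard mean-100, SD-15 scale.
from statistics import NormalDist

iq_scale = NormalDist(mu=100, sigma=15)

for iq in (100, 140):
    print(iq, round(100 * iq_scale.cdf(iq), 1))
# 100 50.0
# 140 99.6
```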
“If we picture the concept of absolute intelligence as some kind of optimal information process with certain well-defined characteristics, whose lower and upper bounds are determined only by the laws of physics, I’m afraid human intelligence will hardly be comparable to it in any really meaningful way.”
The Sentience Quotient is the closest thing I can think of, and it’s mostly good for describing why humans and trees have few productive conversations (though the upper bound is also interesting).
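For reference, and going from memory of Freitas’s definition, so treat the specific figures below as assumptions rather than citations: the Sentience Quotient is SQ = log10(I / M), the base-10 log of information-processing rate in bits per second over processor mass in kilograms. A human brain comes out in the low teens, plants many orders of magnitude lower, which is the humans-and-trees point; the physics-limited ceiling is usually quoted around +50.

```python
# Sentience Quotient as I recall Freitas defining it: SQ = log10(I / M),
# with I the information-processing rate in bits/s and M the processor mass in kg.
# The example inputs are rough, assumed ballpark figures, not authoritative data.
from math import log10

def sentience_quotient(bits_per_second, mass_kg):
    return log10(bits_per_second / mass_kg)

print(round(sentience_quotient(1e13, 1.5)))   # human brain, ballpark: ~13
print(round(sentience_quotient(0.1, 10.0)))   # a tree-ish plant, ballpark: ~-2
# A gap of ~15 orders of magnitude is why the conversations are unproductive;
# the theoretical upper bound from physical limits is the interesting part.
```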