This implies you have another metric for cognitive function which an IQ point does not match. What is that other metric?
It implies no such thing; hence my asking for ideas rather than presenting them. The only thing we know for certain is that, due to how IQ tests are measured and calibrated, there is no particular reason why they SHOULD represent an actual, consistent metric—they merely note where on the bell curve of values you are, not what actual value that point on the bell curve represents. (At core, of course, it simply represents “number of questions on a particular IQ test that you got right”, and everyone agrees that that metric is measuring SOMETHING about intelligence, but it would be nice to have a more formal metric for “smartness” that actually has real-world consequences.)
ETA: I certainly have an intuitive idea for what “smartness” would mean as an actual quantifiable thing, which seems to have something to do with pattern-recognition / signal-extraction performance across a wide range of noisy media. This makes some sense to me, since IQ tests—especially the ones that attempt to avoid linguistic bias—typically involve pattern-matching and similar signal extraction/prediction tasks. So intuitively, I think intelligence will have units of Entropy per [Kolmogorov complexity × time], and any unit which measures “one average 100 IQ human” worth of Smartness will have some ungodly constant-of-conversion comparable to Avogadro’s number.
NOTE 2: Like I said, this is an intuitive sense, which I have not done ANY formal processing on.
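Written out as bare dimensional analysis (this is only the intuition above made explicit, in my own notation; nothing formal):

$$[\mathrm{Smartness}] \;=\; \frac{\mathrm{Entropy\ (bits)}}{\mathrm{Kolmogorov\ complexity} \times \mathrm{time\ (seconds)}}$$

with that ungodly constant of conversion then relating one unit of this quantity to “one average IQ-100 human”.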
Well, you need some framework. You said that IQ points are not “necessarily a consistent metric for cognitive function”. First, what is “cognitive function” and how do you want to measure it? If you have no alternate metrics then how do you know IQ points are inconsistent and what do you compare them to?
everyone agrees that that metric is measuring SOMETHING about intelligence
The usual answer is that it is measuring the g factor, the unobserved general-intelligence capability. It was originally formulated as the first principal component of the results of a variety of IQ tests. It is quantifiable (by IQ points) and it does have real-world consequences.
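For concreteness, here is a minimal sketch of that first-principal-component construction on made-up data (the battery, loadings, and sample size below are all invented for illustration):

```python
import numpy as np

# Made-up battery: 500 test-takers, 6 IQ-style subtests.
rng = np.random.default_rng(0)
latent = rng.normal(size=500)                 # the latent ability we pretend exists
loadings = rng.uniform(0.5, 1.0, size=6)      # how strongly each subtest reflects it
scores = latent[:, None] * loadings + rng.normal(size=(500, 6))

# Standardize each subtest, then take the first principal component.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
first_pc = eigvecs[:, -1]                     # eigh sorts ascending; last = largest
g_scores = z @ first_pc                       # each person's estimated g-factor score

# The sign of a principal component is arbitrary, so compare |correlation|.
print(abs(np.corrcoef(latent, g_scores)[0, 1]))   # close to 1
```

(Real psychometrics uses factor-analytic models rather than this bare PCA, but the construction is the same in spirit.)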
Saying that IQ measures g is like saying that flow through a mountain creek measures snowmelt. More of one generally means more of the other, but there’s a bunch of fiddly little details (maybe someone’s airlifting water onto a forest fire upstream, or filling their swimming pool) that add up to a substantial deviation—and there are still a lot of unanswered questions about the way they relate to each other.
In any case, g is more a statement about the correlations between domain skills than the causes of intelligence or the shape of the ability curve. The existence of a g factor tells you that you can probably teach music more easily to someone who’s good at math, but it doesn’t tell you what to look for in a CT scan, or whether working memory, say, will scale linearly or geometrically or in some other way with IQ; those are separate questions.
g is more a statement about the correlations between domain skills than the causes of intelligence or the shape of the ability curve
g is an unobserved value, a scalar. It cannot say anything about “causes of intelligence” or shapes of curves. It doesn’t aim to.
g was observed as a correlation between test scores. That is by definition a scalar value, but we don’t know exactly how the underlying mechanism works or how it can be modeled; we just know that it’s not very domain-specific. It’s the underlying mechanism, not the correlation value, that I was referring to in the grandparent, and I’m pretty sure it’s what ialdabaoth is referring to as well.
g was observed as a correlation between test scores.
To be more precise, the existence of g was derived from observing the correlation of test scores.
Moreover, g itself is not the correlation, it is the unobservable underlying factor which we assume to cause the correlation.
It is still a scalar-valued characteristic of a person, not a mechanism.
The usual answer is that it is measuring the g factor, the unobserved general-intelligence capability. It was originally formulated as the first principal component of the results of a variety of IQ tests. It is quantifiable (by IQ points) and it does have real-world consequences.
Absolutely, but +n g doesn’t necessarily mean +m IQ for all (n,m).
I don’t understand what that means.
Here’s a place where my intuition’s going to struggle to formulate good words for this.
An intelligent system receives information (which has fundamental units of Entropy) and outputs a behavior. A “proper” quantitative measure of intelligence should be a simple function of how much Utility it can expect from its chosen behavior, on average, given an input with n bits of Entropy, and t seconds to crunch on those bits. Whether “Utility” is measured in units similar to Kolmogorov complexity is questionable, but that’s what my naive intuition yanked out when grasping for units.
But the point is, whatever we actually choose to measure g in, the term “+1 g” should make sense, and should mean the same thing regardless of what our current g is. IQ, being merely a statistical fit onto a gaussian distribution, does NOT do that.
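One way to write that sketch down (my own notation, nothing standard): let $\pi(x, t)$ be the behavior the system outputs after $t$ seconds of crunching on input $x$. Then something like

$$\mathrm{Smartness}(n, t) \;=\; \mathbb{E}\big[\, U(\pi(x, t)) \;\big|\; H(x) = n \,\big],$$

where $H$ is Shannon entropy and $U$ is the utility of the resulting behavior. The “+1 g” requirement is then the demand that this be an interval scale: one more unit means the same thing everywhere on it, which a rank-derived score cannot guarantee.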
but +n g doesn’t necessarily mean +m IQ for all (n,m)
This phrase implies that you have a metric for g (different from IQ points) because without it the expression “+n g” has no meaning.
An intelligent system receives information (which has fundamental units of Entropy) and outputs a behavior.
Okay. To be precise we are talking about Shannon entropy and these units are bits.
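For reference: $H(X) = -\sum_i p_i \log_2 p_i$, which comes out in bits exactly when the logarithm is taken base 2.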
A “proper” quantitative measure of intelligence should be a simple function of how much Utility it can expect from its chosen behavior
Hold on. What is this Utility thing? I don’t see how it fits in the context in which we are talking. You are now introducing things like goals and values. Kolmogorov complexity is a measure of complexity, what does it have to do with utility?
the term “+1 g” should make sense, and should mean the same thing regardless of what our current g is
I don’t see this as obvious. Why?
IQ, being merely a statistical fit onto a gaussian distribution
Not so. IQ is a metric, presumably of g, that is rescaled so that the average IQ is 100. Rescaling isn’t a particularly distorting operation. It is not fit onto a gaussian distribution.
IQ is a metric, presumably of g, that is rescaled so that the average IQ is 100. Rescaling isn’t a particularly distorting operation. It is not fit onto a gaussian distribution.
I’m afraid you’re mistaken here. IQ scores are generally derived from a set of raw test scores by fitting them to a normal distribution with mean 100 and SD of 15 (sometimes 16): IQ 70 is thus defined as a score two standard deviations below the mean. It’s not a linear rescaling, unless the question pool just happens to give you a normal distribution of raw scores.
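A minimal sketch of the difference between the two operations (the raw-score pool below is invented; real norming uses a standardization sample):

```python
import numpy as np
from scipy import stats

# Invented, deliberately skewed raw scores.
rng = np.random.default_rng(1)
raw = rng.gamma(shape=2.0, scale=10.0, size=10_000)

# Linear rescaling: shift and stretch only, so the skew survives.
linear_iq = 100 + 15 * (raw - raw.mean()) / raw.std()

# Normalization ("fitting to the bell curve"): replace each raw score
# with the N(100, 15) value at the same percentile rank.
pct = stats.rankdata(raw) / (len(raw) + 1)
fitted_iq = stats.norm.ppf(pct, loc=100, scale=15)

print(round(float(stats.skew(linear_iq)), 2))   # nonzero: still skewed
print(round(float(stats.skew(fitted_iq)), 2))   # ~0: forced onto the bell curve
```

Only the second operation makes “IQ 130” mean “top few percent” by construction, and it is also why equal IQ intervals need not correspond to equal raw-score intervals.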
Hm. A quick look around finds this which says that raw scores are standardized by forcing them to the mean of 100 and the standard deviation of 15.
This is a linear transformation and it does not fit anything to a gaussian distribution.
Of course this is just stackexchange—do you happen to have links to how “proper” IQ tests are supposed to convert raw scores into IQ points?
If the difficulty of the questions can’t be properly quantified, what exactly do the raw scores tell you?
See the first sentence of the penultimate paragraph of this.