History proves otherwise: even people ten times smarter than people like me produce no more extensive or revolutionary technological or scientific output,
I will go out on a limb and assert that this man has a higher-than-average IQ. However, for his statement to be true he would have to be what some call “profoundly mentally retarded”. That is, someone with an IQ below 25. To my knowledge, there have been an exceedingly small number of individuals in the range of 10x that IQ score—amongst them the highest IQ yet recorded. So there are real problems of scale in his underlying assumptions.
Only if you take ‘ten times smarter’ to mean multiplying IQ score by ten. But since the mapping of the bell curve to numbers is arbitrary in the first place, that’s not a meaningful operation; it’s essentially a type error. The obvious interpretation of ‘ten times smarter’ within the domain of humans is by percentile, e.g. if the author is at the 99% mark, then it would refer to the 99.9% mark.
And given that, his statement is true; it is a curious fact that IQ has diminishing returns, that is, being somewhat above average confers significant advantage in many domains, but being far above average seems to confer little or no additional advantage. (My guess at the explanation: first, beyond a certain point you have to start making trade-offs from areas of brain function that IQ doesn’t measure; second, Amdahl’s law.)
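(For concreteness, a minimal sketch of what the percentile reading amounts to in IQ points, assuming the conventional mean-100, SD-15 norming; the helper function below is purely illustrative, not anything from the original discussion.)

```python
# Minimal sketch of the percentile reading, assuming the conventional
# norming of IQ to mean 100 and standard deviation 15.
from scipy.stats import norm

def iq_at_percentile(p, mean=100.0, sd=15.0):
    """IQ score sitting at percentile p of a normal distribution."""
    return mean + sd * norm.ppf(p)

print(iq_at_percentile(0.99))   # ~134.9 -- the 99% mark
print(iq_at_percentile(0.999))  # ~146.4 -- the 99.9% mark
# Under this reading, "ten times smarter" is only about eleven IQ points.
```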
I agree, that’s likely what Carrier was feeling when he wrote that sentence. But that doesn’t let him off the hook, because that way is even worse than Logos’! He’s using a definition of “times more intelligent” that is only really capable of differentiating between humans, and trying to apply it to something outside that domain.
I’m not sure whether the following is already encompassed by Amdahl’s law, but I think it’s worth a comment. Very intelligent humans still need to operate through society to reach their goals. An IQ of 140 may be enough for you to discover and employ the best tools society puts at your disposal. An IQ of 180 (just an abstract example) may let you recognize new and more efficient patterns, but you then have to bend society to exploit them, and this usually means convincing people who are not as smart as you are and who may very well take a long time to grasp your ideas.
As an analogy, imagine being sent back to the Stone Age. A Swiss Army knife there is a very useful tool: it’s not a revolutionary concept, it’s just better than stone knives at cutting meat and working wood. On the other hand, a set of professional power tools, while in principle far more powerful, will be completely useless until you find a way to charge their batteries.
Yup, that’s the way I interpreted it too—going from top 1% to top 0.1%.
To me, a more natural interpretation from a mathematical POV would use log-odds. So if the author is at the 90% mark, someone 10 times as smart occurs at a frequency of around 1 in 3 billion.
But yeah. In context, your way makes more sense, if only because it’s more charitable.
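(A sketch of the arithmetic behind that figure, assuming, purely for illustration, that “10 times as smart” means ten times the log-odds of one’s percentile rank:)

```python
# Sketch of the log-odds reading: "10 times as smart" taken to mean 10x the
# log-odds of one's percentile rank (an assumption, not an official definition).
p = 0.90                  # author at the 90% mark
odds = p / (1 - p)        # = 9
odds_10x = odds ** 10     # multiplying log-odds by 10 raises the odds to the 10th power
one_in_n = odds_10x + 1   # frequency of being at least this smart: 1 in (odds + 1)
print(f"{one_in_n:,.0f}") # 3,486,784,402 -- roughly the "1 in 3 billion" figure above
```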
IQ is renormalized to the bell curve by definition, so multiplying it by 10 isn’t guaranteed to be a meaningful operation. And since we have no other way to measure intelligence, it’s not clear what Carrier meant by “10 times smarter”. For some easy interpretations (e.g. 10x serial speed or 10x parallelism) his claim seems trivially wrong.
“10 times” just means “a lot”. I’m more curious about what Carrier meant by “smart”.
It is a simple way of expressing “a lot,” but it’s also one that immediately raises the question “is there any meaningful sense in which anyone that smart has actually existed?”
Of course, when Carrier claims that the most remarkably intelligent people do not tend to be the most productive, it’s clear what kind of individuals he has in mind; but the obvious next question is “can we design machines that use their intelligence more productively than humans?” Considering how human brains actually work, this sounds like much less of a tall order than making AI that are more intelligent in a humanlike way.
Well, the central limit theorem says it’s mostly a bell curve among humans (you could make a case for a bigger tail on the low end, but still mostly a bell curve). And you can always identify “0” with a random number generator. So multiplying by 10 seems okay to me.
Only subject to some major assumptions.
Not that major. The assumptions are that there are many small, independent things that affect intelligence. These assumptions are wrong, in that there are many things that do not have a small effect at all. But to the extent that these (mostly bad) things are rare, you’ll just see a bell curve with slightly larger tails.
Why can we assume that all the little things affect intelligence independently? Are synergies obviously rare, and how rare do they have to be for the central limit theorem to apply? In the simplest alternative model I can think of, incremental advances could be multiplicative instead of additive, which gives a log-normal distribution instead of a bell curve. This case is uninteresting because you could just say you’re measuring e^intelligence instead of intelligence, but I can imagine more complicated cases.
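(A quick illustrative simulation of the two models, with entirely made-up effect sizes: summing many small independent effects gives roughly a bell curve, while multiplying them gives roughly a log-normal.)

```python
# Illustrative sketch only: many small, independent effects combined
# additively vs. multiplicatively; the effect sizes here are made up.
import numpy as np

rng = np.random.default_rng(0)
effects = rng.uniform(-0.1, 0.1, size=(20_000, 200))    # 200 small effects per "person"

additive = effects.sum(axis=1)                   # CLT: roughly a bell curve
multiplicative = np.prod(1.0 + effects, axis=1)  # roughly log-normal instead

def skew(x):
    return float(np.mean(((x - x.mean()) / x.std()) ** 3))

print(skew(additive))        # close to 0: symmetric and normal-looking
print(skew(multiplicative))  # clearly positive: the long right tail of a log-normal
```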
Side note: I think it is not well known that for the quintessential normally distributed random variable, human height, the lognormal distribution is in fact an equally good fit. And on the other end of the variance spectrum: I became biased toward the lognormal distribution when I observed that it is a much better fit for social network degree distributions than the much-discussed power-law. It is a very versatile thing.
Good point.
IQ is renormalized to the bell curve by definition, so multiplying it by 10 isn’t guaranteed to be a meaningful operation. And since we have no other way to measure intelligence, it’s not clear what Carrier meant by “10 times smarter”.
Well… IQ is meant to be a direct quantification of raw “intellectual capacity”. So while its distribution is relative given the history of tests thus far, it still remains a quantification. But, that being said, this only further exacerbates the point I’m really getting at here: the ‘logic’ the man used is… fuzzy.
IQ is meant to be a direct quantification of raw “intellectual capacity”.
No it isn’t; it is a framework for relative rankings. Developing some means of “direct quantification” would be a major intellectual achievement, which as a first step would require a good definition of intelligence. I have been thinking about this, and while there are quite a few useful definitions of intelligence out there, they each have notable weaknesses; we are a long way from a good definition.
Just as thermometers are a tool that measures temperature as relative degrees, and a serious understanding of and definition of heat waited on the development of the statistical theory of molecular motions.
Just as thermometers are a tool that measures temperature as relative degrees,
Amusing—those are a direct quantification of temperature. Degrees Celsius, for example, converts to degrees Kelvin rather well. They use arbitrarily fixed points above zero K—but the IQ scale does not do this.
Now, of course, IQ is not g. And we have no means of quantifying g.
I think maybe you are under the misapprehension that by “intellectual capacity” I was saying “intelligence”. If I had meant “intelligence” I would have said “intelligence”.
This is a needlessly pedantic response to a comment which can be dissected in many other ways.
It was the first thing that stood out to me, quite frankly, and it seemed a rather fundamental criticism of the clarity of thought of the author: the vast majority—it seemed to me—of his position rested upon a notion that was both faulty and exposed by my original posting.
Any other ‘dissection’ seems entirely unnecessary, in my eyes, given this.
It isn’t obvious to you that this is a fairly off-the-cuff response, and that “10 times” is used in a slightly colloquial way to mean “a lot more”?
It isn’t obvious to you that off-the-cuff responses reveal underlying biases and assumptions equally as well—if not more so—as deeply-thought-out ones?
The very fact that “10 times as smart” is intelligible as merely “a lot more” requires certain underlying assumptions about the available space of intelligence, and that addresses the very fundamental assumptions of his writing.
It isn’t obvious to you that off-the-cuff responses reveal underlying biases and assumptions equally as well—if not more so—as deeply-thought-out ones?
Declaring that “10 times as smart” must be a reference to IQ points and then proceeding to attempt to back that interpretation up despite the absurdity reveals something a whole lot more significant than a simple reference to “10 times as smart”.
Declaring that “10 times as smart” must be a reference to IQ points
I did nothing of the sort.
IQ is a standard measure of “smartness”.
then proceeding to attempt to back up the rather absurd judgement reveal something a whole lot more significant than a simple reference to “10 times as smart”.
Would you care to make a complete thought out of this?
What you said was
I will go out on a limb and assert that this man has a higher-than-average IQ. However, for his statement to be true he would have to be what some call “profoundly mentally retarded”. That is, someone with an IQ below 25. To my knowledge, there have been an exceedingly small number of individuals in the range of 10x that IQ score—amongst them the highest IQ yet recorded.
which suggests that you believed that “ten times as smart” must map to “ten times the IQ score.” To go back to the thermometer reading issue you responded to earlier, yes, a thermometer reading corresponds directly to a scalar quantity, but ordinary thermometer readings aren’t in Kelvin, and neither do IQ tests measure from zero intelligence at zero IQ. Even if we assume that intelligence is a quantity that progresses linearly along the IQ scale (unlikely), mapping “ten times as smart as IQ 25” to “IQ 250” would be rather like mapping “ten times as hot as the reading of 12 degrees C on this thermometer” to “120 degrees C.”
mapping “ten times as smart as IQ 25” to “IQ 250” would be rather like mapping “ten times as hot as the reading of 12 degrees C on this thermometer” to “120 degrees C.”
Or, for that matter, mapping “ten times as hot as a 6” to a 60.
which suggests that you believed that “ten times as smart” must map to “ten times the IQ score.”
To the extent that IQ is the only quantified form of intelligence yet known, yes, that’s absolutely true.
, but ordinary thermometer readings aren’t in Kelvin,
Celsius is Kelvin + a number. Fahrenheit is Kelvin + a number + ratio conversion. This is a total non-starter for your position.
and neither do IQ tests measure from zero intelligence at zero IQ.
0 IQ however does have an intelligible meaning. This again is a total non-starter. Nothing in the observable universe exists at 0K. We “measure” above 0K. No intelligent being actually has 0 IQ. We “measure” above 0 IQ.
Kelvin is quantified temperature relative to absolute zero. IQ is quantified intelligence relative to zero where the units are adjusted to fit the current history of measurements.
mapping “ten times as smart as IQ 25” to “IQ 250” would be rather like mapping “ten times as hot as the reading of 12 degrees C on this thermometer” to “120 degrees C.”
No, it would be exactly like asserting that 120K is 10x the temperature of 12K.
If you’d read the link that I originally gave, you’d note that the “profoundly mentally retarded” range goes from zero to twenty-five.
Zero, in this case, means without intellect at all.
An IQ of 0 corresponds to 6.66 standard deviations below the mean. It’s functionally unmeasurable (when a person is too stupid to take even the tests specially calibrated for people with exceptionally low intelligence, their IQ is too low to quantify), but an IQ of 0 does not correspond to “zero intellect.” “Profound mental retardation” has no defined lower limit, only an upper one. You can also check the link you provided yourself, and you will find that it does not actually make any mention of a lower limit of zero; it defines profound mental retardation as being “<= 20-25”.
An IQ of 100 does not correspond to 100 Intelligence Units, where an entity with zero Intelligence Units has no intelligence; it is simply the defined average of the population, and IQ tests are re-normed to have an average of 100 when the average intelligence changes. IQ points are meant to define where a person falls on the normal curve of human intelligence, not quantify intelligence on an absolute scale.
I suppose that would have to be “Dead and all matter in a state of maximum entropy”.
You can also check the link you provided yourself, and you will find that it does not actually make any mention of a lower limit of zero; it defines profound mental retardation as being “<= 20-25”.
Of course it does. That conforms to the same standard.
but an IQ of 0 does not correspond to “zero intellect.” “Profound mental retardation” has no defined lower limit, only an upper one.
Do me a favor. Find an instance of a person with a zero or a negative IQ score. Then this will be meaningful.
An IQ of 0 corresponds to 6.66 standard deviations below the mean. It’s functionally unmeasurable
Yup.
An IQ of 100 does not correspond to 100 Intelligence Units, where an entity with zero Intelligence Units has no intelligence; it is simply the defined average of the population, and IQ tests are re-normed to have an average of 100 when the average intelligence changes.
What exactly makes you believe these are mutually exclusive statements? That the quantification itself is adjusted speaks to the rule-standard, not to the invalidity of the notion of an absolute zero.
IQ points are meant to define where a person falls on the normal curve of human intelligence, not quantify intelligence on an absolute scale.
Again; what exactly gives you the notion that these are mutually exclusive?
The two are not mutually exclusive; if we knew the relationship between the null point for intelligence and the human average, we could norm the test so that average was defined as 100 and 0 was defined as no intelligence, but we don’t, and if we did that then we would no longer have a definitional standard deviation of 15.
A less misleading way to express IQ scores would be to norm them to 0, to make it clear that they represent deviation above and below the mean and exist without reference to the null point for intelligence.
Do me a favor. Find an instance of a person with a zero or a negative IQ score. Then this will be meaningful.
Such an individual would be rarer than one in twenty billion, as would an individual with IQ over 200.
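(For what it’s worth, a sketch of where rarity figures like that come from under deviation scoring, assuming a strictly normal distribution all the way out to the extremes, which real test norms cannot actually support; the helper function is illustrative only.)

```python
# Sketch: tail frequencies implied by deviation IQ (mean 100, SD 15),
# under the assumption of strict normality out to the extremes.
from scipy.stats import norm

def one_in_n(iq):
    z = abs(iq - 100.0) / 15.0    # standard deviations from the mean
    return 1.0 / norm.sf(z)       # "one in N" for scores at least this extreme

print(f"{one_in_n(200):.1e}")  # ~7.6e10 -- rarer than one in twenty billion
print(f"{one_in_n(0):.1e}")    # the same by symmetry (z is about 6.67 below the mean)
print(f"{one_in_n(230):.1e}")  # ~4.5e17 -- far beyond anything a normed test could establish
```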
Such an individual would be rarer than one in twenty billion, as would an individual with IQ over 200.
Multiple such individuals (of the latter category) are on record. I linked originally to a woman with an IQ of 230.
Why, then, have no 0 or negative individuals ever been recorded, if it is purely a question of how far one deviates from the 100 mark?
but we don’t, and if we did that then we would no longer have a definitional standard deviation of 15.
That’s absurd. We would always need a metric standard; a ‘measuring stick’ against which to determine the units of quantification. That quantification is where the definition of 100 +/- one standard deviation comes from, and why it is useful. This is tiresome: 100 is average, and 0 is non-intelligent, and we do still need the definitional standard of the standard deviation. For the same reason that we also have a specific object that masses one newton. It’s how the quantification is defined.
Or are you going to argue that because we use a class of observations (with error estimates) for mass, that means that an object with zero mass doesn’t have no mass?
0 IQ “means” “no intelligence”. A quotient is a term of quantification. Having a quotient score of zero means said object is quantified at zero.
That’s a way of saying “none”. IQ == 0 “means” “non-intelligent.” They’re synonymous expressions!
Multiple such individuals (of the latter category) are on record. I linked originally to a woman with an IQ of 230.
Such scores have been issued, but are widely regarded as nonsense, and Marilyn Vos Savant’s 200+ score is no longer given credence in the record books. The old formula (mental age divided by chronological age x 100) allowed for a number of individuals with scores over 200, and did not allow for negative scores, but it was flawed in many ways and has been discarded, and no individual has received a score over 200 from a proper application of the IQ test since then.
This is tiresome: 100 is average, and 0 is non-intelligent, and we do still need the definitional standard of the standard deviation. For the same reason that we also have a specific object that masses one newton. It’s how the quantification is defined.
Show me where such a definition is laid out.
Deviation measurements and absolute measurements both serve their purposes, but we don’t have any absolute measurements for intelligence. The IQ test is not and was never intended to be an absolute measurement of intelligence in the way that newtons are a measurement of force. Comparing IQ to temperature, it’s like defining the average particle kinetic energy in a vessel to be 100, with one standard deviation in kinetic energy being 15, without knowing what the temperature inside the vessel is.
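(A toy sketch of that vessel analogy, with made-up numbers, showing that re-norming to a mean of 100 and an SD of 15 erases any information about the absolute scale:)

```python
# Toy sketch of the vessel analogy (all numbers are made up): deviation-style
# norming to mean 100 / SD 15 throws away the absolute scale entirely.
import numpy as np

rng = np.random.default_rng(1)

def normed(raw):
    """Deviation-style score: 100 plus 15 per standard deviation from the sample mean."""
    return 100.0 + 15.0 * (raw - raw.mean()) / raw.std()

cool_vessel = rng.normal(loc=300.0, scale=20.0, size=10_000)
hot_vessel = rng.normal(loc=3000.0, scale=200.0, size=10_000)  # ten times "hotter" in absolute terms

print(normed(cool_vessel).mean(), normed(cool_vessel).std())  # ~100, ~15
print(normed(hot_vessel).mean(), normed(hot_vessel).std())    # ~100, ~15
# Both come out on the same 100 +/- 15 scale; the factor of ten is invisible.
```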
Would you care to make a complete thought out of this?
There was a missing “s” at the end of “reveal”; apart from that, it was correctly formed (if inelegant) as stated.
Alright, then, a few questions.
At what point did I assert that “10 times as smart” must be a reference to IQ, as opposed to using IQ to illustrate the point made?
What exactly is so absurd about even that?
g and IQ are correlated, especially at the lower numbers of IQ. What exactly is this thing that is being revealed by this “absurdity”?