“A person with an IQ of 100 cannot understand certain concepts that people with an IQ of 140 can understand, no matter how much time and how many notebooks they have.”

-- Michael Anissimov
You know, I’m going to remember that one and try to remind myself of it whenever I notice my status is interfering with my thoughts.
This quote seems to undermine the SIAI’s position.
I don’t see how. Whatever truth it might hold for a 140⁄100 IQ gap, it doesn’t hold for arbitrarily smart beings, who could tell a lesser being all the self-modifying it would need to do to reach cognitive parity.
In any case, as those who have seen my posts here know, my warning lights go off whenever someone claims that something “can’t be explained, even given infinite time and space”.
The point being made seems to be a contradiction of a common AI-theorist, futurist position.
That position being that once a computer algorithm with an effective AI IQ of 100 is produced, it can increase its intelligence to arbitrary levels through the addition of extra hard-drive space, RAM, and processing power.
IMO the analogy slightly fails: it doesn’t include anything analogous to the increase in RAM, which is a very important factor, as it allows complex concepts to be dealt with as a whole.
The original quote said, “the most difficult subjects can be explained to the most slow-witted man”. This contradicts, in my opinion, what Michael Anissimov (Media Director, SIAI) takes to be the case, namely that “a person with an IQ of 100 cannot understand certain concepts that people with an IQ of 140 can”.
I was just being pedantic here, but I thought highlighting this point would be worthwhile, as other people, like Greg Egan, seem to disagree. This is an important question regarding the dangers posed by AI.
Infinite time and space. That’s a lot of time and space. I suspect my warning lights would go off too. Do people make claims like that often?
Anissimov sure did: “no matter how much time and how many notebooks they have.”
I’m not sure what your point is exactly, but if it is anything like “that quote by Anissimov is largely mistaken”, then you are correct.
It’s an open question. I just don’t know whether that is the case, and I’m very curious to learn more about this. The chimp-human example is very convincing. Further, someone with Down syndrome probably cannot understand what you can comprehend. So where is the gap, and is there one? It looks like there is, which says yes, I should believe what the SIAI claims. However, the original quote claims that “the most difficult subjects can be explained to the most slow-witted man”. If that were the case, it would hint at the possibility that a superhuman AI would merely be superhumanly fast and have superhuman memory and RAM, yet wouldn’t reside on a different conceptual level.
Both Anissimov and Tolstoy appear to me to be engaging in hyperbole. Further, Tolstoy was writing at the end of the nineteenth century, when computers didn’t exist, much less the idea of general artificial intelligence.
How do you mean?
While a perfectly valid, and somewhat relevant, point if true, that position does need support.
It seems pretty intuitively obvious—unless the notebooks have special things written on them.
Really?
Give an example of something that you believe the average person, given 200 years of completely focused study, couldn’t possibly understand.
Intuitively obvious but also wrong. IQ primarily makes a difference in how long it takes to master something and in which knowledge can be created independently, without the need to load it from a notebook. Both of these factors have been granted in unlimited measure in the hypothetical. Obviously IQ will make a difference in how effectively those concepts can be applied.
I make the tentative prediction: “There are no concepts that can be discovered by typical people with an IQ of 140 that cannot be understood by an average person with an IQ of 100, no matter how much time and how many notebooks the latter is given.” I would also be comfortable extending the IQ limit to 200, so long as a clear condition of (neuro)typicality is maintained.