I basically agree with your core point — that (reasonably smart) humans are generally intelligent, that there’s nowhere further to climb qualitatively than being “generally” intelligent, and that general intelligence is a yes/no binary. I’ve been independently making some very similar arguments: that general intelligence is the ability to simulate any mathematical structure, chunk these structures into memory-efficient abstractions, and then perform problem-solving in the resultant arbitrary mathematical environment.
But I think you’re underestimating the power of “merely quantitative” improvements: working-memory size, long-term memory size, processing speed, freedom from biases and instincts, etc.
As per Dave Lindbergh’s answer, we should expect humans to be about as bad at general intelligence as it’s possible to be while still being powerful enough to escape evolution’s training loop. All of these quantitative variables are set about as low as they can go.
In particular, most humans are not generally intelligent most of the time, and many probably don’t even know how to turn on their “general intelligence” at will. They use cached computations and heuristics instead, acting on autopilot. In turn, that means our entire civilization (and any given organization in it) isn’t generally intelligent all of the time either, which would put us at an obvious disadvantage in a conflict with a true optimizer.
As per tailcalled’s answer, any given human is potentially a universal problem-solver but in practice has only limited understanding of some limited domains.
As per John Wentworth’s answer, the ability to build abstraction-chains more quickly gives you massive advantages. A very smart person today would trivially outplay any equally-intelligent caveman, or any equally-intelligent medieval lord, given the same resources and all of the relevant domain knowledge of their corresponding eras. And a superintelligent AI would be able to make as much cognitotech progress over us as we have over these cavemen/medievals, and so trivially destroy us.
The bottom line: Yeah, an ASI won’t be “qualitatively” more powerful than humans, so ant:human::human:ASI isn’t a precise mechanical analogy. But it’s a pretty good comparison of levels of effective cognitive power anyway.