I don’t see anything in that scenario that prevents a human-level AGI from using a collection of superintelligent tool AIs, via a better interface, to achieve feats of intelligence that humans cannot achieve even with the same tool AIs.
At that point, it wouldn’t be functionally different from a series of tool AIs controlled directly by a human operator. If that poses a risk, then mitigations could be extrapolated to the combined-system scenario.
What fundamental law of the universe would set a limit right there, out of all possible capacities across every possible form of computing substrate?
I’m not trying to imply there is something about the human mind specifically that forces a limit on computing power; I just used it as a benchmark because it is the only frame of reference we have. Whether the system is dumber than a human or slightly smarter, within the same order of magnitude, doesn’t really matter.
The trade-off I have in mind is simply that the more complexity a system devotes to imitating consciousness, the more computational ability is sacrificed, tending towards some lower bound of computational capacity that one might not count as superintelligent. I’m not claiming to have any physical or information-theoretic law in mind for that at present, though.