I don’t associate autonomy with consciousness at all, so I’m unsure how to deal with the option “no self-awareness or autonomy associated with consciousness”. But let’s just take it as an unexplained counterfactual and go from there.
I don’t see anything in that scenario that prevents a human-level AGI from using a collection of superintelligent tool AIs with a better interface to achieve feats of intelligence that humans cannot, even with the same tool AIs. I’m not sure how much this differs from simply calling the combined system “superintelligent AGI”.
But even that much restriction seems extremely unlikely to me. There does not seem to be any physical or information-theoretic reason why general intelligence cannot go faster, broader, and deeper than some arbitrary biological life-form that merely happened to be the first at achieving some useful level of flexibility in world modelling.
What fundamental law of the universe would set a limit right there, out of all possible capacities across every possible form of computing substrate? Even a priori, the probability seems to be on the order of 1%, considering a logarithmic scale of where such a bar could be set. Given what we can deduce about possible other computing substrates and the limited size of human brains compared with the universe, it seems much less likely still.
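To put a rough number on that log-scale intuition, here is a minimal sketch. The figures are purely illustrative assumptions (the plausible range of orders of magnitude and the width of the "human-level" band are made up for the example), not anything derived from the discussion itself:

```python
# Back-of-the-envelope version of the log-scale argument.
# Assumption: the ceiling on general intelligence could plausibly sit
# anywhere across ~30 orders of magnitude of computational capacity,
# and "roughly human level" covers about 1 order of magnitude of that range.
# A log-uniform prior then puts only a few percent on the ceiling
# landing in that particular band.

total_orders_of_magnitude = 30   # assumed plausible range for where a ceiling could sit
human_band = 1                   # assumed width of the "human-level" band

p_ceiling_at_human_level = human_band / total_orders_of_magnitude
print(f"P(ceiling ~ human level) = {p_ceiling_at_human_level:.1%}")  # about 3%
```

Widening the assumed range, or narrowing how much counts as "roughly human level", only pushes the prior lower.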
The only thing that looks like it could possibly interfere is something like an unknown nonphysical principle of consciousness that somehow, despite all evidence to the contrary, turns out to be required for general intelligence and has fundamental reasons why it cannot get any faster or more capable no matter what physical system it is associated with. I give that very poor odds indeed, and lower odds still that it comes down to “a certain trade-off in computer science”.
I don’t see anything in that scenario that prevents a human-level AGI from using a collection of superintelligent tool AIs with a better interface to achieve feats of intelligence that humans cannot, even with the same tool AIs.
At that point, it wouldn’t be functionally different from a series of tool AIs being controlled directly by a human operator. If that poses a risk, then mitigations could be extrapolated to the combined-system scenario.
What fundamental law of the universe would set a limit right there, out of all possible capacities across every possible form of computing substrate?
I’m not trying to imply there is something about the human mind specifically that forces a limit on computing power; I just used it as a benchmark because it is the only frame of reference we have. Whether the system ends up somewhat dumber or slightly smarter than a human, while staying on the same order of magnitude, doesn’t really matter.
The concept of a trade-off is simply that the more complex a system must be to imitate consciousness, the more computational ability is sacrificed, tending towards a lower bound of capability that one might not count as superintelligent. I’m not saying I have any physical or information-theoretic law in mind for that currently, though.