You are right that the same capabilities enable a pivotal act. My concern is that they won't be used for one (where a pivotal act is defined as a good act).
Having thought about it some more, I think the biggest problem in the multipolar, human-controlled, RSI-capable AGI scenario is that the worst actor tends to be the one who defects first, and therefore the one who controls the future.
More ethical humans will tend to be more hesitant to commit to, or risk, mass destruction to achieve their ends, so they'll tend to hold off on aggressive moves that could win.
"Hide and create a superbrain and a robot army" are not the first things a good person tells their AGI to do, let alone ordering it to induce nuclear strikes that increase one's odds of winning at great cost. Someone with more selfish designs on the future may have much less trouble issuing those orders.