Ok, so should the amplified human first try to turn off the agent-foundations-based project that is going to turn off this human once complete?
If the amplified human could take over the world but hasn’t because he’s not evil, and predicts that this other AI system would do such evil, yes.
It’s plausible, though, that the decision theory used by the new AI would tell it to act predictably non-evilly, in order to make the amplified human see this coming and not destroy the new AI before it’s turned on.
Note that this amplified human has thereby already taken over the world in all but name.
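To make the decision-theory point above concrete, here is a minimal toy sketch, not from the dialogue itself: the amplified human perfectly predicts the new AI's policy and shuts it off iff it predicts evil, so the AI maximizes its utility by choosing the predictably non-evil policy. All names and payoff numbers are illustrative assumptions.

```python
# Toy sketch of the transparent-predictor game described above.
# Assumptions (not from the dialogue): the amplified human predicts the
# AI's policy perfectly, and the payoff numbers are made up.

AI_POLICIES = ["act_evil", "act_non_evil"]

def ai_utility(policy: str, shut_off: bool) -> float:
    """Hypothetical utilities: being shut off is worst; acting evil
    while left running beats acting non-evil while left running."""
    if shut_off:
        return 0.0
    return 2.0 if policy == "act_evil" else 1.0

def human_shuts_off(predicted_policy: str) -> bool:
    # The amplified human destroys the new AI iff it predicts evil.
    return predicted_policy == "act_evil"

# Because the human's decision depends on the *predicted* policy,
# the AI's best choice is the predictably non-evil one.
best = max(AI_POLICIES, key=lambda p: ai_utility(p, human_shuts_off(p)))
print(best)  # -> act_non_evil
```

Under these assumed payoffs, `act_non_evil` wins because the evil policy is predicted, and punished, before it can ever pay off.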