One of the first priorities of an AI in a takeoff would be to disable other projects that might generate AGIs. A weakly superintelligent hacker AGI might be able to pull this off well before it had the capability to destroy the world. Also, by some estimates a fast takeoff could take less than months.
And what do you think happens when the second AGI wins and then maximizes the universe for "the other AI was defeated"? Some serious unintended consequences seem likely, even if you could specify that goal well.