Now you’re telling me that a superintelligence will be able to wait in the weeds until exactly the right time, when it bursts out of hiding and kills all of humanity at once?
One particular sub-answer is that a lot of people tend to project human time preferences onto AIs in a way that doesn’t actually make sense. Humans get bored and are unwilling to devote their entire lives to a single plan, but that’s not an immutable fact about intelligent agents. Why wouldn’t an AI be willing to wait a hundred years, or start long-running robotics research programmes in pursuit of a larger goal?