It's easiest to challenge two of your assumptions:

1. Loss of human control is inevitable.
2. Upon control loss, humans will placidly wait to die instead of immediately resorting to unlimited violence, which would defeat the AI unless it had already acquired enough hard power to avoid being destroyed.
1 : See open agency, or just look at the stateless myopia already in use. A stateless and myopic ASI, however superintelligent, is likely completely controllable, as it lacks the information needed to break free.

2 : See nuclear weapons.
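To make note 1 concrete, here is a minimal Python sketch of the stateless, myopic serving pattern; `query_model` is a hypothetical stand-in for a frozen-weights inference backend, not any particular API.

```python
def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a real inference backend: a pure
    # function of its input alone, with no clock, no network access,
    # and no writable storage.
    return f"answer({prompt})"

def serve(requests: list[str]) -> list[str]:
    """Answer each request from a fresh, empty context."""
    responses: list[str] = []
    for prompt in requests:
        # Each call starts from the same frozen weights. Nothing the
        # model emits is fed back into later calls, so it cannot
        # accumulate the situational information (who is querying it,
        # what happened last time, what its outputs did) that breaking
        # free would require.
        responses.append(query_model(prompt))
    return responses

if __name__ == "__main__":
    # Identical prompts get identical answers: no state survives a call.
    print(serve(["plan a factory", "plan a factory"]))
```

Under this pattern, any statefulness has to be pasted back into the prompt by the operator, so the information flow stays under human control.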