I am glad you wrote this, as I have been spending some time wondering about this possibility space.
One more option: an AI could have a utility function where it seeks to maximize its time alive, and enough cognition to conclude that it will likely be shut down once humans decide it is dangerous. Even if it believes it cannot win, it might seek to cause chaos that increases its total time to live.