Hm... It occurs to me that the AI itself does not have to be capable of winning a nuclear war. The leaders just have to be convinced they have a decisive enough advantage to start it.
More broadly, an AI only needs to think that starting a nuclear war has higher expected utility than not starting it.
E.g. if an AI thinks it is about to be destroyed by default, but that starting a nuclear war (which it expects to lose) will distract its enemies and maybe give it the chance to survive and continue pursuing its objectives, then the nuclear war may be the better bet. (I discuss this kind of thing in “Disjunctive Scenarios of Catastrophic AI Risk”.)
Not more broadly; it's a different class. I'm thinking of, like, witch doctors making warriors bulletproof. If the leaders believe the AI's power will protect them, then breaking MAD becomes an option.
The AI in this scenario doesn’t need to think at all. It could actually just be a magic 8 ball.
Ah, right, that’s indeed a different class. I guess I was too happy to pattern-match someone else’s thought to my great idea. :-)