Failure would presumably occur before we get to the stage of “robot army can defeat unified humanity”—failure should happen soon after it becomes possible, and there are easier ways to fail than to win a clean war. Emphasizing this may give people the wrong idea, since it makes unity and stability seem like a solution rather than a stopgap. But emphasizing the robot army seems to have a similar problem—it doesn’t really matter whether there is a literal robot army, you are in trouble anyway.
I agree that other powerful tools can achieve the same outcome, and that since humanity isn’t in practice unified, a rogue AI could act earlier. But either way you end up with AI controlling the means of coercive force, which helps people understand the end state that is reached.
It’s good both to understand the events by which one is shifted onto the bad trajectory and to be clear about what that trajectory is. It sounds like your focus on the former may have interfered with the latter.
I do agree there was a miscommunication about the end state, and that language like “lots of obvious destruction” is an understatement.
I do still endorse “military leaders might issue an order and find it is ignored” (or total collapse of society) as basically accurate and not an understatement.