I agree it seems plausible that AIs could boost the probability of takeover success (and of holding on to that victory through the first several months) by more than 0.1% by killing a large fraction of humans.
On the other hand, the AI might also need to keep some humans loyal early in the takeover, e.g. to do physical tasks it doesn't have great robot control over. And mass killing isn't necessarily super easy either; attempts in that direction could raise a lot of extra opposition. So it's not clear where the pragmatics point.
(The main thing I was reacting to in my above comment was Steven's scenario where the AI already has many copies across the solar system, already has robot armies, and is contemplating how to send firmware updates. I.e. it seemed more like a scenario of "holding on in the long term" than "how to initially establish control and survive", and in that regime I feel like the surveillance scenarios are probably stable.)