Thinking a bit more, scenarios that seem at least kinda plausible:
“misuse” where someone is just actively trying to use AI to commit genocide or something similar. Or, we get into a humans+AI vs. humans+AI war.
the AI economy takes off with extreme environmental impact; the AI is sort of aligned, but we aren’t good at regulating it fast enough, and we only get it under control after a billion deaths.
Some more:
The AI kills a huge number of people with a bioweapon to destabilize the world and gain a relative advantage for its position.
Massive world war/nuclear war. This could easily kill hundreds of millions; a billion is probably a bit on the higher end of what you’d expect.
The AI controls some nations, but judges that some subset of the humans under its control pose enough of a net risk that mass slaughter looks like a good option.
AIs would prefer to keep humans alive, but multiple misaligned AI factions are racing, and the race causes extreme environmental damage.