(To be clear, I think there is a substantial chance of at least 1 billion people dying and that AI takeover is very bad from a longtermist perspective.)
Is there a writeup somewhere of how we’re likely to get “around a billion people die” that isn’t extinction, or close to it? Something about this phrasing feels weird/suspicious to me.
Like, I have a few different stories for everyone dying (some sooner, some later).
I have some stories where like “almost 8 billion people” die and the AI scans the remainder.
I have some stories where the AI doesn’t really succeed and maybe kills millions of people, in what is more like “a major industrial accident” than “a powerful superintelligence enacting its goals”.
Technically “substantial chance of at least 1 billion people dying” can imply the middle option there, but it sounds like you mean the central example to be closer to a billion than to 7.9 billion or whatever. That feels like a narrow target and I don’t really know what you have in mind.
Thinking a bit more, scenarios that seem at least kinda plausible:
“misuse” where someone is just actively trying to use AI to commit genocide or similar. Or, we get into a humans+AI vs. humans+AI war.
the AI economy takes off, it has lots of extreme environmental impact, and it’s sort of aligned, but we’re not very good at regulating it fast enough; we only get it under control after a billion deaths.
Some more:
The AI kills a huge number of people with a bioweapon to destabilize the world and improve its relative position.
Massive world war/nuclear war. This could easily kill hundreds of millions; 1 billion is probably a bit on the higher end of what you’d expect.
The AI has control of some nations, but thinks that some subset of the humans it controls poses enough of a risk that mass slaughter is a good option.
AIs would prefer to keep humans alive, but there are multiple misaligned AI factions racing and this causes extreme environmental damage.
I think “crazy large scale conflict (with WMDs)” or “mass slaughter to marginally increase odds of retaining control” or “extreme environmental issues” are all pretty central in what I’m imagining.
I think the number of deaths for these is maybe log-normally distributed around 1 billion or so. That said, I’m low confidence.
(For reference, if the same fraction of people died as in WW2, it would be around 300 million. So, my view is similar to “substantial chance of a catastrophe which is a decent amount worse than WW2”.)
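For readers who want a concrete sense of what this implies, here is a minimal sketch, assuming a particular spread for the log-normal (the sigma value and the rough WW2 population figures are my own illustrative assumptions, not claims from the thread):

```python
import numpy as np

# Rough illustration of "deaths maybe log-normally distributed around 1 billion".
# The median matches the comment above; the spread (sigma) is an assumed value
# added purely for illustration.
rng = np.random.default_rng(0)

median_deaths = 1e9   # ~1 billion, per the comment
sigma = 1.0           # assumed log-space spread, NOT from the comment

samples = rng.lognormal(mean=np.log(median_deaths), sigma=sigma, size=1_000_000)

print(f"median deaths:            {np.median(samples):.2e}")
print(f"P(deaths > 100 million):  {np.mean(samples > 1e8):.2f}")
print(f"P(deaths > 8 billion):    {np.mean(samples > 8e9):.3f}")

# Sanity check on the WW2 comparison: WW2 killed roughly 70-85 million of a
# world population of ~2.3 billion (~3%); the same fraction of ~8 billion
# people today is roughly 250-300 million.
ww2_fraction = 75e6 / 2.3e9
print(f"WW2 death fraction scaled to today: {ww2_fraction * 8e9:.2e}")
```

With the assumed sigma, most of the mass sits well above WW2-scale deaths while only a couple percent sits above “nearly everyone dies”, which roughly matches the “decent amount worse than WW2” framing; different sigma choices would shift these tail probabilities.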