“LW is a community of people who mostly share the idea that AI is the main existential risk.”
The survey results show no agreement that AI is the most likely cause of killing 90%+ of the human population by 2100. There might plausibly be agreement that a bad outcome from AI is the most likely existential risk, though, on the theory that survivors could eventually recover from a civilizational collapse caused by nukes or bioweapons.