There is a good reminder at the beginning that existential risk is about what happens eventually, not about the scale of catastrophes. For example, a synthetic pandemic that kills 99% of the population doesn't by itself constitute an existential risk, since, all else equal, recovery is possible, and it's unclear how the precedent of such a catastrophe shifts existential risk going forward. But a permanent AI-tool-enforced anti-superintelligence regime, even without any catastrophe, does fit the concern of existential risk.
The alternative to the position that AI will very likely kill everyone, for which plausible arguments also exist, is that AI spares everyone. This is still an existential risk, since humanity doesn't get the future; we only get the tiny corner of it that the superintelligences allocate to our welfare. In this technical sense of existential risk, it's coherent to simultaneously put the chance of existential doom at 90% while putting the chance of human extinction at only 30%, the remaining 60% being futures where humanity survives but never gets more than that allocated corner.