I think you're missing one important existential risk separate from extinction: a lastingly suboptimal society. Think systemic institutional inefficiency, combined with being unable to change anything because of disempowerment. In that scenario, maybe humanity is still around because one of the things we can measure and optimize for is keeping a minimum number of humans alive, but the living conditions are undesirable.
Stretching the definition to include anything suboptimal is the most ambitious stretch I’ve seen so far. It would include literally everything that’s wrong, or can ever be wrong, in the world. Good luck fixing that.
On a more serious note, this post is about existential risk as defined by e.g. Ord. Anything beyond that (and there's a lot!) is out of scope.
Not everything suboptimal, but suboptimal in a way that causes suffering on an astronomical scale (e.g. a galactic dystopia, a dystopia that lasts for thousands of years, or a dystopia with an extreme number of moral patients, such as uploads). I'm not sure what you mean by Ord, but I think it's reasonable to assign a significant probability to S-risk from a Christiano-like failure.