Stretching the definition to include anything suboptimal is the most ambitious stretch I’ve seen so far. It would include literally everything that’s wrong, or can ever be wrong, in the world. Good luck fixing that.
On a more serious note, this post is about existential risk as defined by, e.g., Ord. Anything beyond that (and there's a lot!) is out of scope.
Not everything suboptimal, but suboptimal in a way that causes suffering on an astronomical scale (e.g. a galactic dystopia, a dystopia that lasts for thousands of years, or a dystopia with an extreme number of moral patients, such as uploads). I'm not sure what you mean by Ord, but I think it's reasonable to assign a significant probability to S-risk from a Christiano-like failure.