There don’t seem to be many plausible paths to s-risks: by default, we shouldn’t expect them, because it would be quite surprising for an amoral AI system to think it was particularly useful or good for humans to _suffer_, as opposed to not exist at all, and there doesn’t seem to be much reason to expect an immoral AI system.
I think this is probably false, but it’s because I’m using the strict definition of s-risk.
I expect that, to the extent there's any human-like or animal-like stuff in the future, the sheer amount of additional computation available implies that even proportionally small risks of suffering add up to greater aggregates of suffering than currently exist on Earth.
If 0.01% of an intergalactic civilization's resources were being used to host suffering programs, such as nature simulations or extremely realistic video games, then this would certainly qualify as an s-risk under the definition given here: "S-risks are events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far."
If you define s-risks as situations where a proportionally large share of computation is devoted to creating suffering, then I would agree with you. However, s-risks could still be important because they may be unusually tractable. One reason: even a very small group of people who strongly don't want suffering to exist might successfully lobby a society that only weakly prefers to keep proportionally small amounts of suffering around. Suffering might be unique among values in this respect, because on other issues people would be more inclined to push back.
Yeah, I should have said something like "the biggest kinds of s-risks, where there is widespread optimization for suffering".