I think the reason cousin_it’s comment is upvoted so much is that a lot of people (including me) weren’t really aware of s-risks or how bad they could be. It’s one thing to make a throwaway line that s-risks could be worse, but it’s another thing entirely to put together a convincing argument.
Similar ideas have appeared in other articles, but they framed the argument in terms of energy-efficiency while leaning on unfamiliar terms such as computronium or the two-envelopes problem, which made it much less clear. I don’t think I had seen the links to either of those articles before, but even if I had, I probably wouldn’t have read them.
The title helps as well. S-risks is a catchy name, especially if you already know x-risks. I know the term has been used before, but it wasn’t used in a title. Further, while the article is quite good, you can read the summary, introduction and conclusion without encountering the idea that the author believes s-risks are much greater than x-risks, as opposed to being just yet another risk to worry about.
I think there’s definitely an important lesson to be drawn here. I wonder how many other articles have gotten close to an important truth, but just failed to hit it out of the park for one reason or another.
Further, while the article is quite good, you can read the summary, introduction and conclusion without encountering the idea that the author believes s-risks are much greater than x-risks, as opposed to being just yet another risk to worry about.
I’m only confident about endorsing this conclusion conditional on having values where reducing suffering matters a great deal more than promoting happiness. So we wrote the “Reducing risks of astronomical suffering” article in a deliberately ‘balanced’ way, pointing out the different perspectives. This is why it didn’t make any very strong claims. I don’t find the energy-efficiency point convincing at all, but for those who do, x-risks are likely (though not with very high confidence) still more important, mainly because more futures will be optimized for good outcomes than for bad outcomes, and this is where most of the value is likely to come from. The “pit” around the FAI-peak is in expectation extremely bad compared to anything that exists currently, but most of it is just accidental suffering that is still comparatively unoptimized. So in the end, whether s-risks or x-risks are more important to work on at the margin depends on how suffering-focused or not someone’s values are.
Having said that, I totally agree that more people should be concerned about s-risks and it’s concerning that the article (and the one on suffering-focused AI safety) didn’t manage to convey this point well.