Further, while it is quite a good article, you can read the summary, introduction, and conclusion without encountering the idea that the author believes s-risks are much greater than x-risks, as opposed to being just yet another risk to worry about.
I’m only confident about endorsing this conclusion conditional on having values where reducing suffering matters a great deal more than promoting happiness. That is why we wrote the “Reducing risks of astronomical suffering” article in a deliberately ‘balanced’ way, pointing out the different perspectives, and why it didn’t end up making any very strong claims.

I don’t find the energy-efficiency point convincing at all, but for those who do, x-risks are likely (though not with very high confidence) still more important, mainly because more futures will be optimized for good outcomes than for bad ones, and this is where most of the value is likely to come from. The “pit” around the FAI peak is, in expectation, extremely bad compared to anything that exists currently, but most of it is accidental suffering that remains comparatively unoptimized.

So in the end, whether s-risks or x-risks are more important to work on at the margin depends on how suffering-focused someone’s values are.
Having said that, I totally agree that more people should be concerned about s-risks, and it’s unfortunate that the article (and the one on suffering-focused AI safety) didn’t manage to convey this point well.
Interesting!