Reducing suffering is a good goal, but what you’re talking about, in that case, is not saving the world, but improving it. It’s not just a matter of semantics; it’s a critically different perspective.
On the other hand, you also mention the possibility of humanity destroying ourselves. This is certainly something that we can rightly speak of “saving” the world from. But notice that this is a different concern than the “reducing suffering” one!
When you ask “What do we have to do to [accomplish goal X]?”, you have to be quite clear on what, precisely, goal X is.
The two goals that you mention can (and likely do!) have very different optimal approaches/strategies. It is even possible (in fact, due to resource constraints, it is likely) that they’re at odds with one another. If so, you may have to prioritize—at the very least.
“Save the world” sounds punchy, memorable, inspiring. But it’s not a great frame for thinking practically about the problem, which is difficult enough to demand the greatest rigor. With problems of this magnitude, errors compound and blossom into catastrophes. Precision is everything.
I probably should have made it clearer that I was inviting debate on the specific angle you just brought up. I was trying to limit my bias by not being the first person to answer my own question. You’re right that the framing of the problem is problematic.
>It is even possible (in fact, due to resource constraints, it is likely) that they’re at odds with one another.
They’re almost certainly extremely at odds with each other. Saving humanity from destroying itself points in the other direction from reducing suffering, not by 180 degrees, but at a very sharp angle. This is not just because of resource constraints, but even more so because humanity is a species of torturers and it will try to spread life to places where it doesn’t naturally occur. And that life obviously will contain large amounts of suffering. People don’t like hearing that, especially in the x-risk reduction demographic, but it’s pretty clear the goals are at odds.
Since I’m a non-altruist, there’s not really any reason to care about most of that future suffering (assuming I’ll be dead by then), but there’s not really any reason to care about saving humanity from extinction, either.
There are some reasons why the angle is not a full 180 degrees: there might be aliens who would also cause suffering and with whom humanity would compete for resources, humanity might wipe itself out in ways that also cause suffering (such as AGI), or there might be practical correlations between political philosophies that produce high suffering and those that produce high extinction probability, e.g. torturers are less likely to care about humanity’s survival. But none of these make the goals point in the same direction.