The current state of things, where people suffer when they don’t have to due to circumstances outside of their control. Just because the world is the product of seven billion (largely) uncoordinated people and untold dead doesn’t mean we have the excuse that seven billion people (or probably fewer) can’t fix “the way things are.” While I concede that we aren’t permanent fixtures on the planet, I am sufficiently disturbed by the idea that our version of humanity might be one of the many possible versions that destroys itself out of shortsightedness that I am willing to embark on any plan with a reasonable chance of working (and a suite of backup plans), using all the resources that can be mustered by the means available to us.
Reducing suffering is a good goal, but what you’re talking about, in that case, is not saving the world, but improving it. It’s not just a matter of semantics; it’s a critically different perspective.
On the other hand, you also mention the possibility of humanity destroying ourselves. This is certainly something that we can rightly speak of “saving” the world from. But notice that this is a different concern than the “reducing suffering” one!
When you ask “What do we have to do to [accomplish goal X]?”, you have to be quite clear on what, precisely, goal X is.
The two goals that you mention can (and likely do!) have very different optimal approaches/strategies. It is even possible (in fact, due to resource constraints, it is likely) that they’re at odds with one another. If so, you may have to prioritize—at the very least.
“Save the world” sounds punchy, memorable, inspiring. But it’s not a great frame for thinking practically about the problem, which is quite difficult enough to demand the greatest rigor. With problems of this magnitude, errors compound and blossom into catastrophes. Precision is everything.
I probably should have made it clearer that I was inviting debate on the specific angle you just brought up. I was trying to limit my bias by not being the first person to answer my own question. You’re right that the framing of the problem is itself problematic.
>It is even possible (in fact, due to resource constraints, it is likely) that they’re at odds with one another.
They’re almost certainly extremely at odds with each other. Saving humanity from destroying itself points in the other direction from reducing suffering, not by 180 degrees, but at a very sharp angle. This is not just because of resource constraints, but even more so because humanity is a species of torturers and it will try to spread life to places where it doesn’t naturally occur. And that life obviously will contain large amounts of suffering. People don’t like hearing that, especially in the x-risk reduction demographic, but it’s pretty clear the goals are at odds.
Since I’m a non-altruist, there’s not really any reason to care about most of that future suffering (assuming I’ll be dead by then), but there’s not really any reason to care about saving humanity from extinction, either.
There are some reasons why the angle is not a full 180 degrees: There might be aliens who would also cause suffering and humanity might compete with them for resources, humanity might wipe itself out in ways that also cause suffering, such as AGI, or there might be a practical correlation between political philosophies that cause high suffering and those that carry high extinction probability, e.g. torturers are less likely to care about humanity’s survival. But none of these make the goals point in the same direction.
>The current state of things, where people suffer when they don’t have to due to circumstances outside of their control.
Ah, I can very much relate to that sentiment! The Effective Altruism movement was spawned largely in response to concerns like that. Have you looked into their agenda, methods, and achievements?
Why do you think the world needs saving and from what?