I sure hope Effective Altruism's Ultimate Goal is not to Eradicate Human Suffering, because there is a way to achieve that goal that is available to humanity as-is, and it's awful: just make sure that there are no humans.
I understand that's not what you describe here (and I don't think it's a solution you'd endorse). But… I think it's important to avoid committing to the wrong goals.
For what it's worth, the sort of naive failing you describe is the negative-utilitarian version of the repugnant conclusion. Negative preference utilitarianism addresses it, analogous to the way the repugnant conclusion for (positive) total utilitarianism can be addressed by various means, though it is by no means the only option. That said, Will doesn't really address this in the post, so I'm not quite sure what he has in mind, if anything, in terms of formal population-ethical reasoning.
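To spell out why the two views come apart, here is a minimal sketch (the notation is mine, not anything from the post). Let $s_i \ge 0$ be the suffering of person $i$ in population $P$. Total negative utilitarianism maximizes

$$W_{\mathrm{NU}}(P) = -\sum_{i \in P} s_i,$$

which attains its maximum at $W_{\mathrm{NU}}(\emptyset) = 0$: the empty population is optimal, which is exactly the failure mode above. Negative preference utilitarianism instead counts frustrated preferences $f_i \ge 0$, including the preference to go on living, so an act of extinction frustrates the standing preferences of everyone currently alive and scores strictly worse than the status quo whenever anyone prefers to survive.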