(note: I don’t identify as Utilitarian, so discount my answer as appropriate)
You can split the question into multiple parts:
1) Should I be an altruist, who gives up resources to benefit others more than myself?
2) If so, what does “benefit” actually mean for others?
3) How can I best achieve my desires, as defined by #1 and #2?
#1 is probably not answerable using only logic—this is up to you and your preferred framework for morals and decision-making.
#2 gets to the title of your post (though the content ranges further). Do you benefit others by reducing global population? By making some existing lives more comfortable or longer (and which ones)? There’s a lot more writing on this, but no clear enough answers that it can be considered solved.
#3 is the focus of E in EA—if your goals match theirs (and if you believe their methodology for measuring), then EA helps identify the most efficient ways you can use resources for these goals.
To answer your direct question—maybe! To the extent that you’re pursuing topics that EA organizations are also pursuing, you should probably donate to their recommended charities rather than trying to do it yourself or going through less-measured charities.
To the extent that you care about topics they don’t, don’t. For instance, I also donate to local arts groups and city- and state-wide food charities, which I deeply understand are benefiting people who are already very lucky relative to global standards. If utility is fungible and there is declining utility for resources for any given recipient, this is not efficient. But I don’t believe those things are smooth enough curves to overwhelm my other preferences.
To the extent that you’re pursuing topics that EA organizations are also pursuing, you should probably donate to their recommended charities rather than trying to do it yourself or going through less-measured charities.
Well yes, this is basically the crux of my question.
As in, I obviously agree with the E, and I tend to agree with the A, but my issue is with how the A seems to be defined in EA (as in, mainly around improving the lives of people that you will never interact with or ‘care’ about on a personal level).
So I agree with: I should donate to some of my favorite writers/video-makers who are less popular and thus might be kept in business by $20 monthly on Patreon if another hundred people think like me (efficient as opposed to, say, donating to an org that helps all artists, or donating to well-off creators).
I also agree with: it’s efficient to save a life halfway across the globe for $x,000 as opposed to one in the EU, where it would cost $x00,000 to achieve a similar addition in healthy life years.
Where I don’t understand how the intuition really works is: “Why is it better to save the life of a person you will never know/meet than to help 20 artists that you love?” (or some such equivalence).
As in, I get that there’s some intuition about it being “better”, and I agree that intuition might be strong enough in some people that it’s just “obvious”, but my thinking was that there might be some sort of better ethics-rooted argument for it.
Nope, in the end it all comes down to your personal self-conception and intuition. You can back it up with calculations and testing your emotional reaction to intellectual counterfactuals (“how does it feel that I saved half a statistical life, but couldn’t support my friend this month”). But all the moral arguments I’ve seen come down to either religious authority or assertion that some intuitions are (or should be) universal.