I do believe that my comment accurately characterizes the large EA organizations like GiveWell and philosophers like Peter Singer. I do realize that EAs are smart people, and many individual EAs have other beliefs and engage in all sorts of research. For example, some EAs are concerned about nuclear war with Russia, and today I discovered the Global Catastrophic Risk Institute and the Global Priorities Project, which are outside of my critique. However, for now, Peter Singer, GiveWell, Giving What We Can, and similar approaches are the most emblematic of EA, and it is toward this style of EA that my critique is directed, which I indicated in my previous comment when I said I was addressing “typical” or “median” EA. I believe it is fair to judge EA (as it currently exists) by these dominant approaches.
I disagree with you that I am stereotyping, but I think it’s good for me to clarify the scope of my critique, so I am adding a note to my previous comment that links to this comment.
That 80,000 Hours post doesn’t contradict my argument at all, and in fact reinforces it. My comment never argued that EAs believe that everyone should earn to give, only that they are very confident about their moral claims about what people should do with their money. That post still shows that 80,000 Hours believes that at least 10% of people should earn to give, which is still an incredibly strong ethical claim.
A lot of the post seems to confuse complex strategic moves, like GiveWell’s decision to start by focusing on lives saved by proven interventions, with the belief that lives saved by proven interventions are the most important thing.
Obviously GiveWell cannot show that its interventions are the “most important thing.” But GiveWell does claim that its proven interventions are a sufficiently good thing to justify you spending money on them, and this is an immense moral claim. It’s not like GiveWell is a purely informational website.
In the context of the larger EA movement, Peter Singer’s philosophy and EA pledges argue with incredible confidence that people should be giving. EA is extremely evangelical, and Singer’s philosophy is incredibly flawed and emotionally manipulative.
The problem is that none of the most common EA approaches have defeated the “null giving hypothesis” of spending your money on yourself, or saving it in an investment account and then giving the compounded amount to another cause in the future. If someone is already insisting on giving to charity, then GiveWell might redirect their money in a direction that is actually useful, but EA is also trying to get people involved who were not doing charity before, and its moral arguments and understanding of the world are just not strong enough to justify spending money on the most dominant charitable approaches.
“X is the most efficient birdfeeder on the market” is a different type of claim from “the best birdfeeder on the market is worth spending money on,” or “feeding birds is a moral imperative,” or “we should pledge to feed birds and evangelize other people to do so, too.” My impression is that EAs are getting these kinds of claims mixed up.