The main reason most people donate to charities may be to signal status to others, or to “purchase warm fuzzies” (a form of status signalling to one’s own ego).
Effective altruists claim to really care about doing good with their donations, but their giving could just be a form of status signalling targeted at communities where memes such as consequentialism, utilitarianism, and “rationality” are well received, and/or, similarly, a way to “purchase warm fuzzies” for someone wishing to maintain a self-image as a utilitarian/“rationalist”.
To this end, effective altruism doesn’t have to be actually effective; it need only superficially pretend to be.
Yes, I think there are people for whom this is true. However, the best way to get such people to actually do good is to make “pretending to actually do good” and “actually doing good” equivalently costly, by calling them out when they do the latter (EDIT: former).
I personally want effective altruism to actually do good, not just satisfy people’s social desires (though as Diego points out, this is also important). If it turns out that the point of the EA movement becomes to help people signal to a particular consequentialist set, then my hypothetical apostasy will become an actual apostasy, so I’m still going to list this as a critique.
Individual donors can’t plausibly estimate the expected marginal QALYs/$ of charities; they have to rely on meta-charities like GiveWell. But how do you estimate the performance of GiveWell? Given that estimation is costly, GiveWell has no incentive to become any better; it actually has an incentive to become worse. Even if the people currently running GiveWell are honest and competent, they might fall victim to greed or self-serving biases that could make them overestimate their own performance, especially since they lack any independent form of evaluation or model to compare against. Or the honest and competent people could be replaced by less honest and less competent people. Or GiveWell as a whole could be driven out of business and replaced by a competitor that spends less on estimation quality and more on PR. The whole industry has a real possibility of becoming a Market for Lemons.
GiveWell spends a lot of time making it easier to estimate their performance (nearly everything possible is transparent, a “mistakes” tab is prominently displayed on the website, etc.). And I know some people take their raw material (conversations, etc.) and come to fairly different conclusions based on different values. GiveWell also solicits external reviews.
I think this is as good an incentive structure as we’re going to get (EDIT: not quite; as Carl Shulman points out, more competitors would be better, but without a lot of extra effort it’s hard to beat). Fundamentally, it seems like anything altruistic we do is going to have to rely on at least a few “heroic” people who are responding to a desire to actually do good rather than to social signalling.
Everything else you said, I agree with. Are those all of your reasons for not endorsing EA? If not, I’d like to hear the others (by PM if you like).
GiveWell spends a lot of time making it easier to estimate their performance (nearly everything possible is transparent, a “mistakes” tab is prominently displayed on the website, etc.). And I know some people take their raw material (conversations, etc.) and come to fairly different conclusions based on different values. GiveWell also solicits external reviews.
I think this is as good an incentive structure as we’re going to get
I think it would be better with more competitors in the same space keeping each other honest.
However, the best way to get such people to actually do good is to make “pretending to actually do good” and “actually doing good” equivalently costly, by calling them out when they do the latter.
I’m not sure what you mean by the last clause. Do you mean “calling them out when they do the former”? Or do you mean “making the primary way to pretend to actually do good such that it actually does good”?
GiveWell spends a lot of time making it easier to estimate their performance (nearly everything possible is transparent, a “mistakes” tab is prominently displayed on the website, etc.). And I know some people take their raw material (conversations, etc.) and come to fairly different conclusions based on different values. GiveWell also solicits external reviews.
This is nice to hear. Still, you have to trust them to report their own shortcomings accurately. And if more and more people join EA for status reasons, GiveWell and related organizations may become less incentivized to achieve high performance.
Everything else you said, I agree with. Are those all of your reasons for not endorsing EA? If not, I’d like to hear the others (by PM if you like).
Mostly, these are the reasons I can think of. Maybe I could also add that donations to people in impoverished communities might create market distortions with difficult-to-assess results, but I suppose that could be lumped into the estimation-difficulties category of objections.
I think it would be better with more competitors in the same space keeping each other honest.
Ah, good point. Weakened.
Not necessarily; a lot of competitors might result in competition on providing plausible warm fuzzies rather than on honesty.
I’m not sure what you mean by the last clause. Do you mean “calling them out when they do the former”? Or do you mean “making the primary way to pretend to actually do good such that it actually does good”?
I meant “former”. Sorry for the confusion.