A few thoughts (disclaimer: I do NOT endorse effective altruism):
The main reason most people donate to charities may be to signal status to others, or to “purchase warm fuzzies” (a form of status signalling to one’s own ego). Effective altruists claim to really care about doing good with their donations, but theirs could be just a form of status signalling targeted at communities where memes such as consequentialism, utilitarianism, and “rationality” are well received, and/or similarly a way to “purchase warm fuzzies” for somebody wishing to maintain a self-image as a utilitarian/“rationalist”. To this end, effective altruism doesn’t have to be actually effective; it could just superficially pretend to be.
Effective altruism is based on a form of total utilitarianism, and thus it is subject to the standard problems of that moral philosophy:
Interpersonal utility comparison: metrics such as the QALY are supposed to be interpersonally comparable proxies for utility, but they are highly debatable.
The repugnant conclusion: optimizing for cumulative QALYs may lead to a world where the majority of the population live lives only barely worth living (e.g. a billion lives at welfare 1 sum to more total welfare than a million lives at welfare 100). Note that this isn’t merely a theoretical concern: as Carl Shulman pointed out, GiveWell’s top-ranked charities might well already be pushing in that direction.
Difficulties in distinguishing supererogatory actions from morally required actions, as your example of the person questioning their own desire to have kids shows.
Even if you assume that optimizing cumulative QALYs is the proper goal of charitable donation, there are still problems of measurement and of incentives for all the actors involved, much like the problems that plagued Communism:
Estimating the expected marginal QALYs per dollar of a charitable donation is difficult. Any method would have to rely on a number of relatively strong assumptions. Charities have an incentive to find and exploit any loophole in the evaluation methods, as per Campbell’s law/Goodhart’s law/Lucas critique. (A sketch after this list illustrates how sensitive such estimates are to the assumed inputs.)
Individual donors can’t plausibly estimate the expected marginal QALYs/$ of charities; they have to rely on meta-charities like GiveWell. But how do you estimate the performance of GiveWell? Given that estimation is costly, GiveWell has no incentive to become any better; it actually has an incentive to become worse. Even if the people currently running GiveWell are honest and competent, they might fall victim to greed or self-serving biases that could make them overestimate their own performance, especially since they lack any independent form of evaluation or model to compare with. Or the honest and competent people could be replaced by less honest and less competent people. Or GiveWell as a whole could be driven out of business and replaced by a competitor that spends less on estimation quality and more on PR. The whole industry has a real possibility of becoming a Market for Lemons.
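To make the measurement point concrete, here is a minimal Monte Carlo sketch of such an estimate. All parameter ranges below are hypothetical illustrations, not real charity data; the point is only how much the headline QALYs/$ figure moves when the assumed inputs move.

```python
import random

def qalys_per_dollar(cost_per_treatment, effect_qalys, delivery_rate):
    """Marginal QALYs per dollar for one draw of the assumptions."""
    return (effect_qalys * delivery_rate) / cost_per_treatment

# Three of the "relatively strong assumptions" such an estimate rests on,
# each given a made-up uncertainty range (none of these numbers are real):
random.seed(0)
samples = []
for _ in range(100_000):
    cost = random.uniform(2.0, 10.0)      # dollars per treatment delivered
    effect = random.uniform(0.005, 0.05)  # QALYs gained per treatment
    delivery = random.uniform(0.5, 1.0)   # fraction of funds reaching the field
    samples.append(qalys_per_dollar(cost, effect, delivery))

samples.sort()
mean = sum(samples) / len(samples)
p5, p95 = samples[int(0.05 * len(samples))], samples[int(0.95 * len(samples))]
print(f"mean {mean:.4f} QALYs/$, 90% interval [{p5:.4f}, {p95:.4f}]")
# The wide interval shows how much the headline number depends on inputs
# that a charity (or evaluator) gets to argue for, per Goodhart's law.
```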
The main reason most people donate to charities may be to signal status to others, or to “purchase warm fuzzies” (a form of status signalling to one’s own ego).
Effective altruists claim to really care about doing good with their donations, but theirs could be just a form of status signalling targeted at communities where memes such as consequentialism, utilitarianism, and “rationality” are well received, and/or similarly a way to “purchase warm fuzzies” for somebody wishing to maintain a self-image as a utilitarian/“rationalist”.
To this end, effective altruism doesn’t have to be actually effective; it could just superficially pretend to be.
Yes, I think there are people for whom this is true. However, the best way to get such people to actually do good is to make “pretending to actually do good” and “actually doing good” equivalently costly, by calling them out when they do the latter (EDIT: former).
I personally want effective altruism to actually do good, not just satisfy people’s social desires (though as Diego points out, this is also important). If it turns out that the point of the EA movement is to help people signal to a particular consequentialist set, then my hypothetical apostasy will become an actual apostasy, so I’m still going to list this as a critique.
Individual donors can’t plausibly estimate the expected marginal QALYs/$ of charities; they have to rely on meta-charities like GiveWell. But how do you estimate the performance of GiveWell? Given that estimation is costly, GiveWell has no incentive to become any better; it actually has an incentive to become worse. Even if the people currently running GiveWell are honest and competent, they might fall victim to greed or self-serving biases that could make them overestimate their own performance, especially since they lack any independent form of evaluation or model to compare with. Or the honest and competent people could be replaced by less honest and less competent people. Or GiveWell as a whole could be driven out of business and replaced by a competitor that spends less on estimation quality and more on PR. The whole industry has a real possibility of becoming a Market for Lemons.
GiveWell spends a lot of time making it easier to estimate their performance (nearly everything possible is transparent, a “mistakes” tab is prominently displayed on the website, etc.). And I know some people take their raw material (conversations, etc.) and come to fairly different conclusions based on different values. GiveWell also solicits external reviews.
I think this is as good an incentive structure as we’re going to get (EDIT: not actually; as Carl Shulman points out, more competitors would be better, but without a lot of extra effort, it’s hard to beat). Fundamentally, it seems like anything altruistic we do is going to have to rely on at least a few “heroic” people who are responding to a desire to actually do good rather than to social signalling.
Everything else you said, I agree with. Is that the totality of your reasons for not endorsing EA? If not, I’d like to hear the others (by PM if you like).
GiveWell spends a lot of time making it easier to estimate their performance (nearly everything possible is transparent, a “mistakes” tab is prominently displayed on the website, etc.). And I know some people take their raw material (conversations, etc.) and come to fairly different conclusions based on different values. GiveWell also solicits external reviews.
I think this is as good an incentive structure as we’re going to get
I think it would be better with more competitors in the same space keeping each other honest.
Ah, good point. Weakened.
Not necessarily: a lot of competitors might result in competition on providing plausible fuzzies rather than honesty.
However, the best way to get such people to actually do good is to make “pretending to actually do good” and “actually doing good” equivalently costly, by calling them out when they do the latter.
I’m not sure what you mean by the last clause. Do you mean “calling them out when they do the former”? Or do you mean “making the primary way to pretend to actually do good such that it actually does good”?
I meant “former”. Sorry for the confusion.
GiveWell spends a lot of time making it easier to estimate their performance (nearly everything possible is transparent, a “mistakes” tab is prominently displayed on the website, etc.). And I know some people take their raw material (conversations, etc.) and come to fairly different conclusions based on different values. GiveWell also solicits external reviews.
This is nice to hear. Still, you have to trust them to report their own shortcomings accurately. And if more and more people join EA for status reasons, GiveWell and related organizations may become less incentivized to achieve high performance.
Everything else you said, I agree with. Is that the totality of your reasons for not endorsing EA? If not, I’d like to hear the others (by PM if you like).
Mostly these are the reasons I can think of. Maybe I could also add that donations to people in impoverished communities might create market distortions with difficult-to-assess results, but I suppose that this could be lumped into the estimation-difficulties category of objections.
Effective altruism is based on a form of total utilitarianism
This is not true (and incidentally is a pet peeve of mine). I know plenty of EAs who are not utilitarian EAs. Most EAs I know would dispute this (at least in conversation on the EA Facebook group there appears to be a consensus that EA ≠ utilitarianism).
I am curious as to what makes you (/anyone) think this. Could you enlighten me?
I do NOT endorse effective altruism
This statement interests me too.
What do you mean that you do not endorse EA?
Are you referring to the idea of applying reason/rationality to doing good?
Are you saying that you do not support the movement or the people in it?
Do you simply mean that advocating EA just happens to be a thing you have never done?
Are you not altruistic/ethical?
This is not true (and incidentally is a pet peeve of mine). I know plenty of EAs who are not utilitarian EAs. Most EAs I know would dispute this (at least in conversation on the EA Facebook group there appears to be a consensus that EA ≠ utilitarianism).
Effective altruism is not the same as utilitarianism, but it is certainly based on it. What else would you call trying to maximize a numeric measure of cumulative good?
What do you mean that you do not endorse EA?
I think I’ve already responded in the parent comment.
Effective altruism is not the same as utilitarianism, but it is certainly based on it. What else would you call trying to maximize a numeric measure of cumulative good?
This is incorrect. Effective altruism is applying rationality to doing good (http://en.wikipedia.org/wiki/Effective_altruism).
It is not always maximizing. For example, you could be an EA and not believe you should ever actively cause harm (i.e. you would not kill one person to save five).
It does require quantifying things, as much as making any other rational decision requires quantifying things.
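A toy sketch of that distinction, with entirely made-up options and numbers (not anyone’s actual moral accounting): a side-constrained EA filters out actions that actively cause harm before maximizing, rather than maximizing over everything.

```python
# Hypothetical options: (name, lives_saved, actively_causes_harm)
actions = [
    ("do nothing",            0, False),
    ("fund bed nets",         3, False),
    ("kill one to save five", 4, True),
]

def best_action(actions, respect_constraint=True):
    """Maximize lives saved, optionally filtering out actions that
    actively cause harm (the deontological side constraint)."""
    candidates = [a for a in actions if not (respect_constraint and a[2])]
    return max(candidates, key=lambda a: a[1])

print(best_action(actions))                            # ('fund bed nets', 3, False)
print(best_action(actions, respect_constraint=False))  # ('kill one to save five', 4, True)
```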
I think I’ve already responded in the parent comment.
No, you have not. You have expressed criticisms of things EAs do. The OP expressed lots of criticisms too but still actively endorses EA. I ask mainly because I agree with many of your criticisms, but I still actively endorse EA. And I wonder at what point on the path we differ.
It is not always maximizing. For example, you could be an EA and not believe you should ever actively cause harm (i.e. you would not kill one person to save five). It does require quantifying things, as much as making any other rational decision requires quantifying things.
Fair enough. I think it could be said that while the philosophy behind EA is rooted in total utilitarianism, people who practice EA can further constrain it within a deontological moral system. (I suppose that this is true even of people who explicitly proclaim themselves utilitarians.)
No, you have not. You have expressed criticisms of things EAs do. The OP expressed lots of criticisms too but still actively endorses EA. I ask mainly because I agree with many of your criticisms, but I still actively endorse EA. And I wonder at what point on the path we differ.
I wonder that too. If you agree with many of my criticisms, why do you still endorse EA?
The term “EA” is undoubtedly based on a form of total utilitarianism. Whatever the term means today, and whatever Wikipedia says (which, incidentally, weeatquince helped to write, though I can’t remember if he wrote the part he is referring to), the motivation behind the creation of the term was the need for a much more palatable and slightly broader term for total utilitarianism.
I don’t understand what this means. How does one signal to one’s ‘ego’? What information is being conveyed, and to whom?
Effective altruists claim to really care about doing good with their donations, but theirs could be just a form of status signalling
These could both be true at different explanatory levels. What are we taking to be the site of ‘really caring’? The person’s conscious desires? The person’s conscious volition and decision-making? The person’s actions and results?
Difficulties in distinguishing supererogatory actions from morally required actions
What’s the import of the distinction? Presumably we should treat actions as obligatory when that makes the world a better place, and as non-obligatory but praiseworthy when that makes the world a better place. Does there need to be a fact of the matter about how mad morality will be at you if you don’t help people?
Take two otherwise as-identical-as-possible worlds, and make everything obligatory in one world, nothing obligatory in the other. What physical or psychological change distinguishes the two?
I don’t understand what this means. How does one signal to one’s ‘ego’? What information is being conveyed, and to whom?
I’m talking about self-deception, essentially. A perfectly rational agent would not be able to do that, but people aren’t perfectly rational agents: they are capable of self-deception, sometimes deliberately, sometimes unconsciously. Wishful thinking and confirmation bias are instances of this.
These could both be true at different explanatory levels. What are we taking to be the site of ‘really caring’? The person’s conscious desires? The person’s conscious volition and decision-making? The person’s actions and results?
Consider Revealed preferences. Are someone’s actions more consistent with their stated goals or with status seeking and signalling?
What’s the import of the distinction? Presumably we should treat actions as obligatory when that makes the world a better place, and as non-obligatory but praiseworthy when that makes the world a better place.
I’m not sure I can follow you here. This looks like circular reasoning.
I’m not sure I can follow you here. This looks like circular reasoning.
I’m not sure what RobBB meant, but something like this, perhaps:
Utilitarianism doesn’t have fundamental concepts of “obligatory” or “supererogatory”, only “more good” and “less good”. A utilitarian saying “X is obligatory but Y is supererogatory” unpacks to “I’m going to be more annoyed at you/moralize more/cooperate less if you fail to do X than if you fail to do Y”. A utilitarian can pick a strategy for which things to get annoyed/moralize/be uncooperative about according to which strategy maximizes utility.
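A minimal sketch of that unpacking, with invented utility numbers: the utilitarian treats “which norms to moralize about” as one more choice, scored by its expected effect on total utility.

```python
# Invented net-utility scores for candidate moralizing strategies:
# (utility from induced behavior change) - (utility lost to social friction)
strategies = {
    "treat X and Y as obligatory": 10 - 8,
    "treat X as obligatory, Y as supererogatory": 7 - 2,
    "treat nothing as obligatory": 0,
}

# "Obligatory" vs "supererogatory" then just labels whichever
# annoyance/moralizing/cooperation policy scores highest.
best = max(strategies, key=strategies.get)
print(best, "->", strategies[best])
```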
How does one signal to one’s ‘ego’? What information is being conveyed, and to whom?
Praising and blaming oneself seems a ubiquitous feature of life to me...but then I am starting from an observation, not from a theory of how egos work.