Effective altruism is not the same as utilitarianism, but it is certainly based on it. What else would you call trying to maximize a numeric measure of cumulative good?
This is incorrect. Effective altruism is applying rationality to doing good (http://en.wikipedia.org/wiki/Effective_altruism).
It is not always about maximizing. For example, you could be an EA and not believe you should ever actively cause harm (i.e. you would not kill one person to save five).
It does require quantifying things, but only as much as making any other rational decision does.
I think I’ve already responded in the parent comment.
No, you have not. You have expressed criticisms of things EAs do. The OP expressed lots of criticisms too but still actively endorses EA. I ask mainly because I agree with many of your criticisms, but I still actively endorse EA, and I wonder at what point on the path we differ.
> It is not always about maximizing. For example, you could be an EA and not believe you should ever actively cause harm (i.e. you would not kill one person to save five). It does require quantifying things, but only as much as making any other rational decision does.
Fair enough. I think it could be said that while the philosophy behind EA is rooted in total utilitarianism, people who practice EA can further constrain it within a deontological moral system. (I suppose that this is true even of people who explicitly proclaim themselves utilitarians).
> No, you have not. You have expressed criticisms of things EAs do. The OP expressed lots of criticisms too but still actively endorses EA. I ask mainly because I agree with many of your criticisms, but I still actively endorse EA, and I wonder at what point on the path we differ.
I wonder that too. If you agree with many of my criticisms, why do you still endorse EA?
The term “EA” is undoubtedly based on a form of total utilitarianism. Whatever the term means today, and whatever the Wikipedia article says (which, incidentally, weeatquince helped to write, though I can’t remember if he wrote the part he is referring to), the term was coined because a more palatable and slightly broader term for total utilitarianism was needed.