Agreed, but the OP was talking about “effective altruism”, rather than about “effective morality” in general. It’s difficult to talk about altruism at all except within some sort of consequentialist framework. And while there is no simple way of comparing goods, consideration of “effective” altruism (how much good can I do for a relatively small amount of money?) does force us to look at and make very difficult tradeoffs between different goods.
Incidentally, I generally subscribe to rule consequentialism though without any simple utility function, and for much the reasons you discuss. Avoiding vicious disputes between social agents with different values is, as I understand it, one of the “good things” that a system of moral rules needs to achieve.
Rule consequentialism is what I call a multi-threaded moral theory—a blend of deontology and consequentialism, if you will. I advocate multi-threaded theories. The idea that there is a correct single-threaded theory of morality seems implausible. Moral rules, to me, are a subset of modal rules for survival-focused agents.
To work out if something is right, run a bunch of ‘algorithms’ (in parallel threads, if you like), not just one. (No commitment made to Turing computability of said ‘algorithms’, though...)
So...
#assume virtue ethics
If I do X, what virtues does this display/exhibit?
#assume categorical imperative
If everyone does X, how would I value the world then?
#assume principle of utility
Will X increase the greatest happiness for the greatest number?
#assume golden rule
If X were done to me instead of my doing X, would I accept this?
#emotions
If I do X, will this trigger any emotional reaction (disgust, guilt, shame, embarrassment, joy, ecstasy, triumph, etc.)?
#laws
Is there a law or sanction if I do X?
#precedent
Have I done X before, and how did that go?
#relationships
If I do X, what impact will that have on my relationships?
#motives goals
Do I want to do X?
#interest welfare prudence
Is X in my interest? Is it safe, dangerous, etc.?
#value
Does X have value? To me, to others, etc.?
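Since the list above already reads like pseudocode, here is a minimal sketch of what the “parallel threads” idea might look like in Python. Every function name and placeholder score below is hypothetical; each check merely stands in for one of the questions above, not a real decision procedure.

from concurrent.futures import ThreadPoolExecutor

def virtue_check(x):
    # "If I do X, what virtues does this display/exhibit?"
    return 0.0  # placeholder verdict

def universalizability_check(x):
    # "If everyone does X, how would I value the world then?"
    return 0.0

def utility_check(x):
    # "Will X increase the greatest happiness for the greatest number?"
    return 0.0

def golden_rule_check(x):
    # "If X were done to me instead, would I accept this?"
    return 0.0

def legality_check(x):
    # "Is there a law or sanction if I do X?"
    return 0.0

CHECKS = [virtue_check, universalizability_check, utility_check,
          golden_rule_check, legality_check]

def deliberate(x):
    # Run every check in its own thread and collect the verdicts.
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda check: check(x), CHECKS))
    # No single score decides by itself; the whole profile of answers is what gets weighed.
    return {check.__name__: score for check, score in zip(CHECKS, scores)}

print(deliberate("keep a found wallet"))

The point of returning the whole profile rather than a single number is exactly the one made below: sometimes one or two checks settle the matter on their own, and sometimes the overall pattern has to be weighed.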
Sometimes one or two reasons will provide a slam-dunk decision: it’s illegal and I don’t want to do it anyway. Other times, the call is harder.
Personally, I find a range of considerations more persuasive than one. I am inclined to sentimentalism at the meta-ethical tier and particularism at the normative and applied ethical tiers.
Of course, strictly speaking, particularism implies that normative ethical theories are false over-generalizations and that a theory of reasons rests on a theory of values. Values are fundamentally emotive. No amount of post hoc moral rationalization will change that.