Normative ethics: consequentialism, deontology or virtue ethics?
[pollid:86]
Consequentialism: The morality of actions depends only on their consequences.
Deontology: There are moral principles that forbid certain actions and encourage other actions purely based on the nature of the action itself, not on its consequences.
Virtue ethics: Ethical theory should not be in the business of evaluating actions. It should be in the business of evaluating character traits. The fundamental question of ethics is not “What makes an action right or wrong?” It is “What makes a person good or bad?”
All three in weighted combination, with consequentialism scaling so that it becomes dominant in high-stakes scenarios but not elsewhere. I believe that consequentialism, deontology and virtue ethics are mutually reducible and mutually justifying, but that flattening them into any one of the three is a mistake: it raises the error rate by making some values much harder to describe, and it eliminates redundancy in values that would have protected them from corruption.
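To make the stakes-scaling concrete, here is a minimal sketch in Python (my own toy formalization with made-up weights and scores, not a claim about how anyone actually computes this): three evaluations are blended, and the consequentialist weight grows with the stakes of the decision.

    # Toy blend of three moral evaluations; all numbers are purely illustrative.
    def blended_moral_score(consequences_score, duty_score, virtue_score, stakes):
        """Scores lie in [-1, 1]; stakes lies in [0, 1], where 1 means very high stakes."""
        w_conseq = 0.2 + 0.7 * stakes              # dominant only when stakes are high
        remaining = 1.0 - w_conseq
        w_duty, w_virtue = 0.6 * remaining, 0.4 * remaining
        return (w_conseq * consequences_score
                + w_duty * duty_score
                + w_virtue * virtue_score)

    # Low stakes: the duty and virtue terms outweigh the consequentialist term.
    print(blended_moral_score(0.8, -0.5, -0.5, stakes=0.1))   # about -0.15
    # High stakes: the consequentialist term dominates.
    print(blended_moral_score(0.8, -0.5, -0.5, stakes=1.0))   # about 0.67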
Thinking about this...
So, yes, in many cases I make decisions based on moral principles, because the alternatives are computationally intractable. And in a few cases I judge character traits as a proxy for doing either. And I endorse all of that, under the circumstances. Which sounds like what you’re describing.
But if I discovered that one of my moral principles was causing me to act in ways that had consequences I anti-value, I would endorse discarding that principle. Which seems to me like I’m a consequentialist who sometimes uses moral principles as a processing shortcut.
Were I actually a deontologist, as described here, presumably I would shrug my shoulders, perhaps regret the negative consequences of my moral principle (perhaps not), and go on using it.
Admittedly, I’m not sure I have a crisp understanding of the distinction between moral principles (which consequentialism on this account ignores) and values (on which it depends).
I voted “other” for the same reason, though I’m less certain about virtue ethics being equivalent to the other two.
I lean toward Consequentialism but I support something like deontology/virtue ethics for reasons of personal computability.
How should those of us who accept “rule consequentialism” vote?
I went for “Lean toward consequentialism” though it is arguably a form of deontology. “Other” is not very precise.
Rule consequentialism is either consequentialism or deontology (or just inconsistent). What makes it the case that you should follow the rules? If it is that following the rules maximizes expected utility, then it’s ultimately consequentialism. Otherwise, it’s most likely deontology.
A common formulation is that the “rules” are the ones which if generally adopted as a moral code would maximize expected utility: i.e. there is a form of “best” or “ideal” moral code.
However, this can lead to cases where an act which would (by itself) maximize expected utility would also be in violation of the ideal moral code. So the act would be “right” from an act utilitarian point of view, but “wrong” from a rule utilitarian point of view.
Relevant examples here could include torturing someone “for a greater good” (such as to stop the infamous ticking time bomb). The logic for torture in such cases seems very sound from an act utilitarian perspective; however, an ideal moral code would have a rule of the form “Don’t torture anyone, ever, for any reason, no matter if it appears to lead to a greater good”. This, incidentally, is one resolution to Torture vs Dust Specks.
Right, but if the moral code is really ideal on consequentialist grounds and following the rules really leads to better expected outcomes for humans than not doing so, even when it appears otherwise, then the act consequentialist should also agree that you should follow the rule even when it appears to be sub-optimal.
On the other hand, if the claim is that an ideal reasoner with full knowledge should follow the rule even when it provably does not maximize expected utility, then that’s a form of deontology and a consequentialist should disagree.
There is a recognized distinction here between a moral decision procedure and the criterion for an action to be right or wrong.
Pretty much all serious act utilitarians endorse a rule-utilitarian decision procedure, i.e. they recommend that moral agents follow the usual moral rules (or heuristics) even in those cases where the agent believes that departing from the rules would lead to better consequences. The justification for such a decision procedure is of course that humans are not ideal reasoners: we cannot predict and evaluate all the consequences of our actions (including others imitating us), we do not have an ideal, impartial conception of the good, and we tend to get things horribly wrong when we depart from the rules with the best of intentions.
Yet still, by the act utilitarian criterion for “right” and “wrong”, a rule-violating action which maximizes expected utility is “right”. This leads to some odd situations, in which the act utilitarian would have to (privately) classify such a rule-violating action as right, but publicly condemn it, call it the “wrong choice”, quite possibly punish it, and generally discourage people from following it!
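For concreteness, here is a toy Python simulation of that justification (my own illustration with invented numbers, not anything from the thread): an agent who breaks the rule whenever its noisy, optimistically biased estimate says breaking looks better does worse, on average, than an agent who always follows the rule.

    import random

    random.seed(0)
    N = 100_000
    BIAS, NOISE = 0.5, 1.0   # systematic optimism about rule-breaking, and estimation noise

    total_follow = total_judge = 0.0
    for _ in range(N):
        # True utility of breaking the rule in this situation (following it is worth 0).
        true_break = 0.5 if random.random() < 0.1 else -1.0
        # The agent only sees a biased, noisy estimate of that utility.
        estimate = true_break + BIAS + random.gauss(0.0, NOISE)
        total_follow += 0.0                                    # policy 1: always follow the rule
        total_judge += true_break if estimate > 0 else 0.0     # policy 2: break when it "looks" better

    print("always follow the rule:    ", total_follow / N)    # 0.0
    print("break when it looks better:", total_judge / N)     # roughly -0.24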
Yes, I was (improperly) ignoring the typically backward-looking nature of act utilitarianism. I kept saying “maximize expected utility” rather than “maximize utility”, which resulted in true statements that did not reflect what act utilitarians really say.
I blame the principle of charity.
EDIT: And if I were being really careful, I’d make sure to phrase “maximize expected utility” in such a way that it’s clear that you’re maximizing the utility according to your expectations, not maximizing your expectations of utility (wireheading).
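To spell out that last distinction with a toy Python example (my own, with made-up numbers): the intended reading is to pick the action whose outcomes score best under your honest beliefs, i.e. the sum over states of P(state | action) times U(state), not the action that maximizes your own reported expectation, which an agent could game by editing its beliefs or its utility readout.

    # Honest beliefs about outcomes, P(state | action), and a utility function over states.
    P = {
        "help":     {"good_world": 0.7, "bad_world": 0.3},
        "wirehead": {"good_world": 0.1, "bad_world": 0.9},
    }
    U = {"good_world": 10.0, "bad_world": -10.0}

    def expected_utility(action):
        # Utility of outcomes weighted by honest, untampered-with beliefs.
        return sum(p * U[state] for state, p in P[action].items())

    # The intended reading: choose by the expected utility of outcomes.
    print(max(P, key=expected_utility))   # "help" (expected utility 4.0 vs -8.0)

    # The wireheading reading would instead let the agent maximize its own felt
    # expectation, e.g. by rewriting P or U, which trivially breaks the criterion.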
I accept consequentialism but I also believe that “acting like I’m following virtue ethics” tends to have the best consequences.
Voted for “lean toward consequentialism”. As someone once put it, I consider the “fundamental” rules to be consequentialist, but some of the approximations I use (because the fundamental rules are infeasible to calculate from scratch every time) resemble deontology or virtue ethics, much like QFT and GR are time-reversal symmetric but thermodynamics isn’t. Also, ethical injunctions (i.e. fudge factors in my prior probability that certain behaviours will harm someone, to compensate for cognitive biases) and TDT-like game-/decision-theoretic considerations make some of my choices resemble deontology, and a term in my utility function for how awesome I am makes some of my choices resemble virtue ethics.
I assume that, despite the name, people here don’t take consequentialism to imply strictly CDT. I still think that in the True Prisoner’s Dilemma against a paperclip maximizer known to use the same decision algorithms as ourselves it’s immoral to defect.
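A toy rendering of that point (my own sketch, with the usual made-up Prisoner’s Dilemma payoffs): against an opponent running the same decision algorithm, both players necessarily make the same move, so only the diagonal outcomes are reachable, and mutual cooperation beats mutual defection.

    # Payoffs to us for (our_move, their_move), with the standard PD ordering T > R > P > S.
    payoff = {
        ("C", "C"): 3,   # R: mutual cooperation
        ("C", "D"): 0,   # S: we cooperate, they defect
        ("D", "C"): 5,   # T: we defect, they cooperate
        ("D", "D"): 1,   # P: mutual defection
    }

    # Against a causally independent opponent, defection dominates...
    for their_move in ("C", "D"):
        assert payoff[("D", their_move)] > payoff[("C", their_move)]

    # ...but against a copy of our own algorithm the off-diagonal outcomes are
    # unreachable, so the real comparison is along the diagonal:
    print("both cooperate:", payoff[("C", "C")])   # 3
    print("both defect:   ", payoff[("D", "D")])   # 1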
Might the reason why so many philosophers don’t vote for consequentialism be that they’re thinking of pure CDT-style act consequentialism?
Depends again on the level of discourse. Ultimately consequentialism, but a whole lot of deontology and virtue ethics in “real life”.
Moral particularism
For the record, I consider myself a consequentialist who is also a moral particularist.
Fair enough. I should have been more specific. I’m a particularist who thinks consequentialist reasoning is appropriate in certain contexts, but deontological reasoning is appropriate in other contexts. So I’m pretty sure “Other” is the right pick for me.
Is that possible? Can you think both a) that one should in general act so as to maximise happiness/utility/whatever, and b) that there are no general moral rules?
I think that’s a contradiction.
Consequentialism doesn’t require a commitment to maximization of any particular variable. It’s the claim that only the consequences of actions are relevant to the moral evaluation of those actions. I think that’s a weak enough claim that you can’t really call it a general moral principle. So one could believe that only consequences are morally relevant, while the way in which one evaluates actions based on their consequences does not conform to any general principle.
If Luke had said that he’s a utilitarian who is also a particularist, that would have been a contradiction.
That’s a good point. So should I take it from Luke’s claim that he does not believe one should (as a moral rule) maximise expected utility, or anything like that? And that he would say it’s possible (if perhaps unlikely) for an action to be good even if it minimizes expected utility?
I probably shouldn’t speak for Luke, but I’m guessing the answer to this is yes. If it isn’t, then I don’t understand how he’s a particularist.
I don’t see why he should be committed to this claim.
I took it to mean that Luke is requiring an agent to be at least somewhat consequentialist before he even thinks of it in terms of a morality.