Rule consequentialism is either consequentialism or deontology (or simply inconsistent). What makes it the case that you should follow the rules? If it is that following the rules maximizes expected utility, then it’s ultimately consequentialism. Otherwise, it’s most likely deontology.
A common formulation is that the “rules” are the ones which if generally adopted as a moral code would maximize expected utility: i.e. there is a form of “best” or “ideal” moral code.
However, this can lead to cases where an act which would (by itself) maximize expected utility would also be in violation of the ideal moral code. So the act would be “right” from an act utilitarian point of view, but “wrong” from a rule utilitarian point of view.
Relevant examples here could include torturing someone “for a greater good” (such as to stop the infamous ticking time bomb). The logic for torture in such cases seems very sound from an act utilitarian perspective; however, an ideal moral code would have a rule of the form “Don’t torture anyone, ever, for any reason, even if it appears to lead to a greater good”. This, incidentally, is one resolution to Torture vs Dust Specks.
Right, but if the moral code is really ideal on consequentialist grounds and following the rules really leads to better expected outcomes for humans than not doing so, even when it appears otherwise, then the act consequentialist should also agree that you should follow the rule even when it appears to be sub-optimal.
On the other hand, if the claim is that an ideal reasoner with full knowledge should follow the rule even when it provably does not maximize expected utility, then that’s a form of deontology and a consequentialist should disagree.
There is a recognized distinction here between a moral decision procedure and the criterion for an action to be right or wrong.
Pretty much all serious act utilitarians approve of a rule-utilitarian decision procedure, i.e. they recommend that moral agents follow the usual moral rules (or heuristics) even in those cases where the agent believes that departing from the rules would lead to better consequences. The justification for such a decision procedure is of course that humans are not ideal reasoners: we cannot predict and evaluate all the consequences of our actions (including others imitating us), we do not have an ideal, impartial conception of the good, and we tend to get things horribly wrong when we depart from the rules with the best of intentions.
Yet still, by an act utilitarian criterion for “right” and “wrong”, a rule-violating action which maximizes expected utility is “right”. This leads to some odd situations, whereby the act utilitarian would have to (privately) classify such a rule-violating action as right, but publicly condemn it, call it the “wrong choice”, quite possibly punish it, and generally discourage people from following it!
Yes, I was (improperly) ignoring the typically backward-looking nature of act utilitarianism. I kept saying “maximize expected utility” rather than “maximize utility” which resulted in true statements that did not reflect what act utilitarians really say.
I blame the principle of charity.
EDIT: And if I were being really careful, I’d make sure to phrase “maximize expected utility” in such a way that it’s clear that you’re maximizing the utility according to your expectations, not maximizing your expectations of utility (wireheading).
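The two readings can be made concrete with a toy sketch (the action names and payoffs here are purely illustrative, not anything from the thread): one agent maximizes utility according to its expectations, the other maximizes its expectation of utility by tampering with the reward signal itself.

```python
# Toy model of the distinction: "maximize utility according to your
# expectations" vs. "maximize your expectation of utility" (wireheading).
# All names and numbers are made up for illustration.

# World model: probability of each outcome given an action.
model = {
    "work":     {"goal_achieved": 0.7, "goal_failed": 0.3},
    "wirehead": {"goal_achieved": 0.0, "goal_failed": 1.0},
}

# True utility of each outcome.
utility = {"goal_achieved": 1.0, "goal_failed": 0.0}

def expected_utility(action):
    """E[U(outcome) | action]: utility weighted by your honest expectations."""
    return sum(p * utility[o] for o, p in model[action].items())

# Sense 1: pick the action with the highest expected (true) utility.
best = max(model, key=expected_utility)  # chooses "work"

# Sense 2 (wireheading): "wirehead" pins the agent's internal utility
# reading at its maximum regardless of what actually happens, so an
# agent maximizing its own reading prefers it.
internal_reading = {"work": expected_utility("work"), "wirehead": 1.0}
wireheaded_best = max(internal_reading, key=internal_reading.get)  # "wirehead"

print(best, wireheaded_best)
```

The careful phrasing in the comment above is exactly what rules out the second agent: the thing being maximized must be the utility of outcomes, not the agent’s own utility estimate.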
How should we vote for “rule consequentialism”?
I went for “Lean toward consequentialism” though it is arguably a form of deontology. “Other” is not very precise.