As ZankerH said, it leaves out the “required to make” part. Also, gjm’s particular formulation of 2′ makes a statement about comparisons between two given decisions, not a statement about the entire search space of possible decisions.
Exactly what ZankerH and DaFranker said. You could augment a theory consisting of 1, 2′, and 3 with further propositions like “It is morally obligatory to do the morally best thing you can on all occasions” or (after further work to define the quantities involved) less demanding ones like “It is morally obligatory to act so as not to decrease expected total utility” or “It is morally obligatory to act in a way that falls short of the maximum achievable total utility by no more than X”. Or you could stick with 1, 2′, and 3 and worry about questions like “what shall I do?” and “is A morally better than B?” rather than “is it obligatory to do A?”. After all, most of the things we do (even ones explicitly informed by moral considerations) aren’t simply a matter of obeying moral obligations.
Without the “required to make” part, if you tell me “you should do ___ to maximize utility” I can reply “so what?” Being that kind of utilitarian can then be indistinguishable, in terms of what actions it makes me take, from not being a utilitarian at all.
Furthermore, while perhaps I am not obligated to maximize total utility all the time, it’s less plausible that I’m not obligated to maximize it to some extent: for instance, to at least produce more utility than someone we all think is pretty terrible, such as a serial killer. And even that limited degree of obligation produces many of the same problems as being obligated all the time. For instance, we typically think a serial killer is pretty terrible even if he gives away 90% of his income to charity. Am I, then, obliged to be better than such a person? If 20% of his income saves as many lives as his serial killing costs, and if we have similar incomes, that implies I must give away more than 70% of my income to be better than him.
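(To spell that arithmetic out: this is a sketch on the simplifying assumption, mine rather than anything stated in the thread, that the harm of the killings and the good of the donations can be priced in the same income-equivalent units, writing $I$ for our common income.)

$$\text{his net contribution} = \underbrace{0.9I}_{\text{donated to charity}} - \underbrace{0.2I}_{\text{harm of the killings}} = 0.7I,$$

so a donation $D$ of mine makes me better than him, on this accounting, only when $D > 0.7I$, i.e. more than 70% of my income.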
if you tell me “you should do ___ to maximize utility” I can reply “so what?”
If I tell you “you are morally required to do X”, you can still reply “so what?”. One can reply “so what?” to anything, and the fact that a moral theory doesn’t prevent that is no objection to it.
(But, for clarity: what utilitarians say and others don’t is less “if you want to maximize utility, do ___” than “you should do ___ because it maximizes utility”. It’s not obvious to me which of those you meant.)
it’s less plausible that I’m not obligated to maximize it to some extent
A utilitarian might very well say that you are—hence my remark that various other “it is morally obligatory to …” statements could be part of a utilitarian theory. But what makes a theory utilitarian is not its choice of where to draw the line between obligatory and not-obligatory, but the fact that it makes moral judgements on the basis of an evaluation of overall utility.
serial killer [...] 90% [...] 20% [...] 70% [...]
I think it will become clear that this argument can’t be right if you consider a variant in which the serial killer’s income is much larger than yours: the conclusion would then be that nothing you can do can make you better than the serial killer. What’s gone wrong here is that when you say “a serial killer is terrible, so I have to be better than he is” you’re evaluating him on a basis that has little to do with net utility, whereas when you say “I must give away at least 70% of my income to be better than him” you’re switching to net utility. It’s not a big surprise if mixing incompatible moral systems gives counterintuitive results.
On a typical utilitarian theory:
the wealthy serial killer is producing more net positive utility than you are
he is producing a lot less net positive utility than he could by, e.g., not being a serial killer
if you tried to imitate him you’d produce a lot less net positive utility than you currently do
and the latter two points are roughly what we mean by saying he’s a very bad person and you should do better. But the metric by which he’s very bad and you should do better is something like “net utility, relative to what you’re in a position to produce”.
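(To make the larger-income variant concrete, using the same income-equivalent sketch as above, with my own symbols $J$ for the killer’s income and $I$ for yours: his net contribution is $0.7J$, while the most you could conceivably donate is $I$, so)

$$0.7J > I \quad\Longrightarrow\quad \text{no donation } D \le I \text{ can exceed his net } 0.7J,$$

(and the demand “be better than the serial killer” becomes unsatisfiable, which is the point of the variant above.)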
If I tell you “you are morally required to do X”, you can still reply “so what?”. One can reply “so what?” to anything, and the fact that a moral theory doesn’t prevent that is no objection to it.
But for the kind of utilitarianism you’re describing, if you tell me “you are morally required to do X”, I can say “so what” and be correct by your moral theory’s standards. I can’t do that in response to just anything.

What do you mean by “correct”?

Your theory does not claim I ought to do something different.

It does claim something else would be morally better. It doesn’t claim that you are obliged to do it. Why use the word “ought” only for the second and not the first?

Because that is what most English-speaking human beings mean by “ought”.
It doesn’t seem that way to me. It seems to me that “ought” covers a fairly broad range of levels of obligation, so to speak; in cases of outright obligation I would be more inclined to use “must” than “ought”.
But the metric by which he’s very bad and you should do better is something like “net utility, relative to what you’re in a position to produce”.
I don’t think that saves it. In my scenario, the serial killer and I have similar incomes, but he kills people, and he also gives a lot of money to charity. I am in a position to produce what he produces.
Which means that according to strict utilitarianism you would do better to be like him than to be as you are now. Better still, of course, to do the giving without the mass-murdering.
But the counterintuitive thing here isn’t the demandingness of utilitarianism, but the fact that (at least in implausible artificial cases) it can reckon a serial killer’s way of life better than an ordinary person’s. What generates the possibly-misplaced sense of obligation is thinking of the serial killer as unusually bad when deciding that you have to do better, and then as unusually good when deciding what it means to do better. If you’re a utilitarian and your utility calculations say that the serial killer is doing an enormous amount of good with his donations, you shouldn’t also be treating him as someone you are obliged to outdo on the grounds that he’s so awful.
What generates the sense of obligation is that the serial killer is considered bad for reasons that have nothing to do with utility, including but not limited to the fact that he kills his victims directly (rather than, say, using a computer that contributes to global warming, which hurts people) and actively (he kills people rather than merely keeping money that would have saved their lives). The charity-giving serial killer makes it obvious that the utilitarian assumption that more utility is better than less utility just isn’t true, given what actual human beings mean by good and bad.
Um, what’s the difference?
It’s possible to believe some action is morally better than another without feeling it’s required of you to do it.