As far as I understand it, the text quoted here is implicitly relying on the social imperative “be as moral as possible”. This is where the “obligatory” comes from. The problem is that the imperative “be as moral as possible” becomes increasingly difficult to satisfy as more and more actions acquire moral weight. If one has internalized this imperative (which is realistic, given the weight of societal pressure behind it), utilitarianism places an unbearable moral weight on one’s metaphorical shoulders.
Of course, in reality, utilitarianism implies this degree of self-sacrifice only if you demand (possibly inhuman) moral perfection from yourself. The actual weight you have to accept is defined by whatever moral standard you accept for yourself. For example, you might decide to be at least as moral as the people around you, or to be as moral as you can without causing yourself major inconvenience, or even to be as immoral as possible (though you probably shouldn’t do that, especially since it is probably about as difficult as being perfectly moral).
At the end of the day, utilitarianism is just a scale. What you do with that scale is up to you.
The actual weight you have to accept is defined by whatever moral standard you accept for yourself.
That is prone to the charity-giving serial killer problem. Suppose someone kills people but gives 90% of his income to charity, and a donation of just 20% would be enough to produce utility that makes up for his kills. Then pretty much any such moral standard says that you must be better than him; yet he is producing a huge amount of net utility, and to be better than him from a utilitarian standpoint you must give more than 70% yourself.
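To make the 70% figure explicit (a quick sketch, under the assumption that donation utility scales linearly with the fraction of income given, measured in units where donating a fraction \(f\) yields utility \(f\)): the killer’s net contribution is
\[
U_{\text{killer}} = \underbrace{0.9}_{\text{donations}} - \underbrace{0.2}_{\text{harm of the kills}} = 0.7,
\]
while a non-killer donating a fraction \(f\) produces \(U_{\text{you}} = f\), so outdoing him requires \(f > 0.7\).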
If you avoid utilitarianism, you can describe being “better than” the serial killer in terms other than producing more utility; for instance, by distinguishing between deaths resulting from action and deaths resulting from inaction.
Pretty much any such moral standard says that you must be better than him
Why does this need to be the case? I would posit that the only paradox here is that our intuitions find it hard to accept the idea of a serial killer being a good person, much less a better person than one needs to strive to be. This shouldn’t be that surprising: really, it is just the claim that utilitarianism may not align well with our intuitions.
Now, you can totally make the argument that not aligning with our intuitions is a flaw of utilitarianism, and you would have a point. If what you want from a moral theory is a way of quantifying your intuitions about morality, then by all means use a different approach. On the other hand, if your goal is to reason about actions in terms of their cumulative impact on the world around you, then utilitarianism presents the best option, and you may just have to bite the bullet when it comes to your intuitions.