The actual weight you have to accept is defined by whatever moral standard you accept for yourself.
That is prone to the charity-giving serial killer problem. Suppose someone kills people but gives 90% of his income to charity, and only 20% is needed to produce enough utility to make up for his kills. Pretty much any such moral standard says that you must be better than him; yet he is producing a huge amount of net utility, and to be better than him from a utilitarian standpoint you must give at least 70% yourself.
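To spell out the arithmetic behind that 70% figure (a rough sketch, assuming the harm from the kills and the good from the donations are measured in the same fraction-of-income units of utility, so the numbers above can be compared directly):

\[
U_{\text{killer}} = \underbrace{0.9}_{\text{donated}} - \underbrace{0.2}_{\text{cancelled by the kills}} = 0.7,
\qquad
U_{\text{you}} = d - 0 = d,
\]

so matching his net contribution requires \(d \ge 0.7\): under these assumptions you would have to give at least 70% of your income, despite killing no one.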
If you avoid utilitarianism, you can describe being “better than” the serial killer in terms other than producing more utility; for instance, by distinguishing between deaths resulting from action and deaths resulting from inaction.
“Pretty much any such moral standard says that you must be better than him”
Why does this need to be the case? I would posit that the only paradox here is that our intuitions find it hard to accept the idea of a serial killer being a good person, much less a better person than one needs to strive to be. That shouldn't be surprising; really, it is just the claim that utilitarianism may not align well with our intuitions.
Now, you can totally make the argument that not aligning with our intuitions is a flaw of utilitarianism, and you would have a point. If what you want from a moral theory is a way of quantifying your intuitions about morality, then by all means use a different approach. On the other hand, if your goal is to reason about actions in terms of their cumulative impact on the world around you, then utilitarianism presents the best option, and you may just have to bite the bullet when it comes to your intuitions.