There’s a sort of Tortoise-and-Achilles-type problem in interpreting the word ‘should’: you have to somehow get from “I should do X” to actually doing X; that is, you have to convert the outputs of the moral theory into actions (or influence on actions). We’re used to doing this with boolean-valued morality like deontology, so there the conversion doesn’t feel like a problem.
Asking utilitarianism to answer “Should I do X?” is an attempt to reuse our accustomed solution to the above problem. The trouble is that by doing so you’re lossily turning utilitarianism’s outputs into booleans, and every attempt to do this runs into problems (usually demandingness). The real answer is to solve the analogous problem with numbers instead of booleans, to somehow convert “Utility of X is 100; Utility of Y is 80; Utility of Z is −9999” into being influenced towards X rather than Y and definitely not doing Z.
The purpose of the theory is that it ranks your options, and you’re more likely to do higher-ranked options than you otherwise would be. It’s classified as a moral theory because it causes you to help others and promote the overall good more than self-interest would otherwise lead you to. It just doesn’t do so in a way that’s easily explained in the wrong language.
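To make the numeric version concrete, here is a minimal sketch of one way utilities could exert that kind of graded influence. The softmax-style choice rule and the temperature value are my own illustrative assumptions, not something utilitarianism itself specifies:

```python
import math
import random

def choice_weights(utilities, temperature=20.0):
    """Turn numeric utilities into probabilities of choosing each option.

    Higher-utility options get more weight, but nothing is forced into a hard
    yes/no. The temperature is an illustrative knob: lower values approach
    strict maximization, higher values approach indifference.
    """
    # Subtract the maximum utility before exponentiating, for numerical stability.
    best = max(utilities.values())
    weights = {k: math.exp((u - best) / temperature) for k, u in utilities.items()}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

utilities = {"X": 100, "Y": 80, "Z": -9999}
probs = choice_weights(utilities)
print(probs)  # X weighted about 2.7x as heavily as Y; Z's weight underflows to zero

# Being "influenced towards X rather than Y, and definitely not Z":
action = random.choices(list(probs), weights=list(probs.values()))[0]
print(action)
```

With these numbers, X is weighted about 2.7 times as heavily as Y and Z is effectively never chosen, which is the “influenced towards X rather than Y and definitely not doing Z” behaviour expressed as probabilities rather than a single verdict.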
Isn’t a “boolean” right/wrong answer exactly what utilitarianism promises in the marketing literature? Or, more precisely, doesn’t it promise to select for us the right choice among a collection of alternatives? If the best outcomes can be ranked (by global goodness, or whatever standard), then logically there is a winner, or set of winners, from which one may, without guilt, indifferently choose.
From a utilitarian perspective, you can break an ethical decision problem down into two parts: deciding which outcomes are how good, and deciding how good you’re going to be. A utility function answers the first part. If you’re a committed maximizer, you have your answer to the second part. Most of us aren’t, so we have a tough decision there that the utility function doesn’t answer.
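A tiny sketch of that split, reusing the example utilities from the earlier comment with illustrative function names of my own choosing: the utility function settles part one, the ranking; part two is whatever decision rule you attach to it, and the committed maximizer is just one possible answer.

```python
# Part one: the utility function says which outcomes are how good.
utilities = {"X": 100, "Y": 80, "Z": -9999}

def rank_options(utilities):
    """Part one, operationalized: order the options from best to worst."""
    return sorted(utilities, key=utilities.get, reverse=True)

def committed_maximizer(utilities):
    """One answer to part two: always do the top-ranked thing."""
    return rank_options(utilities)[0]

print(rank_options(utilities))        # ['X', 'Y', 'Z']
print(committed_maximizer(utilities)) # 'X'
# If you're not a committed maximizer, some other rule has to fill this slot,
# and the utility function itself is silent about which one.
```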