There’s a sort of Tortoise-and-Achilles type problem in interpreting the word ‘should’: you have to somehow get from “I should do X” to actually doing X; that is, you have to convert the outputs of the moral theory into actions (or into influence on actions). We’re used to doing this with boolean-valued morality like deontology, so the gap doesn’t strike us as a problem.
Asking utilitarianism to answer “Should I do X?” is an attempt to reuse our accustomed solution to the above problem. The trouble is that by doing so you’re lossily turning utilitarianism’s outputs into booleans, and every attempt to do this runs into problems (usually demandingness). The real answer is to solve the analogous problem with numbers instead of booleans, to somehow convert “Utility of X is 100; Utility of Y is 80; Utility of Z is −9999” into being influenced towards X rather than Y and definitely not doing Z.
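One illustrative way to make that conversion concrete (a minimal sketch in Python; the softmax rule and the temperature value are my own assumptions, not anything utilitarianism itself specifies) is to treat the utilities as weights on how likely you are to choose each option, so higher-utility options are favoured without the lower ones becoming a forbidden “wrong answer”:

    import math
    import random

    # The utilities from the example above.
    utilities = {"X": 100, "Y": 80, "Z": -9999}

    def choice_probabilities(utilities, temperature=10.0):
        # Softmax: higher utility means higher probability of being chosen.
        # Subtract the max for numerical stability before exponentiating.
        m = max(utilities.values())
        weights = {a: math.exp((u - m) / temperature) for a, u in utilities.items()}
        total = sum(weights.values())
        return {a: w / total for a, w in weights.items()}

    probs = choice_probabilities(utilities)
    print(probs)  # X ~0.88, Y ~0.12, Z effectively zero

    action = random.choices(list(probs), weights=list(probs.values()))[0]

Under these made-up settings you are strongly influenced towards X, occasionally end up with Y, and essentially never do Z, which is the numeric analogue of being influenced towards an option rather than commanded to take it.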
The purpose of the theory is that it ranks your options, and you’re more likely to do higher-ranked options than you otherwise would be. It’s classified as a moral theory because it causes you to help others and promote the overall good more than self-interest would otherwise lead you to. It just doesn’t do so in a way that’s easily explained in the wrong language.
Isn’t a “boolean” right/wrong answer exactly what utilitarianism promises in the marketing literature? Or, more precisely, doesn’t it promise to select for us the right choice among a collection of alternatives? If the best outcomes can be ranked—by global goodness, or whatever standard—then logically there is a winner, or a set of winners, from which one may, without guilt, indifferently choose.
From a utilitarian perspective, you can break an ethical decision problem down into two parts: deciding which outcomes are how good, and deciding how good you’re going to be. A utility function answers the first part. If you’re a committed maximizer, you have your answer to the second part. Most of us aren’t, so we have a tough decision there that the utility function doesn’t answer.
Well, for one thing, if I’m unwilling to sign up for more than N personal inconvenience in exchange for improving the world, such a theory lets me take the set of interventions that cost me N or less inconvenience and rank them by how much they improve the world, and pick the best one. (Or, in practice, to approximate that as well as I can.) Without such a theory, I can’t do that. That sure does sound like the sort of work I’d want a moral theory to do.
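To make that concrete, here is a toy sketch in Python with entirely invented interventions and numbers; the point is just the shape of the computation (filter by my observed inconvenience budget, then maximize world-improvement within it):

    # All names and numbers below are hypothetical.
    interventions = [
        # (name, personal_inconvenience, world_improvement)
        ("donate 10% of income", 5, 80),
        ("donate 50% of income", 40, 300),
        ("volunteer on weekends", 8, 60),
        ("share a fundraiser", 1, 5),
    ]

    N = 10  # the inconvenience budget I observe myself actually accepting

    affordable = [iv for iv in interventions if iv[1] <= N]
    best = max(affordable, key=lambda iv: iv[2])
    print(best[0])  # "donate 10% of income" given these invented numbers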
Okay, but it sounds like either the theory is quite incomplete, or your limit of N is counter to your moral beliefs. What do you use to decide that world utility would not be improved by N+1 personal inconvenience, or to decide that you don’t care about the world as much as yourself?
I don’t need a theory to decide I’m unwilling to sign up for more than N personal inconvenience; I can observe it as an experimental result.
it sounds like either the theory is quite incomplete, or your limit of N is counter to your moral beliefs
Yes, both of those seem fairly likely.
It sounds like you’re suggesting that only a complete moral theory serves any purpose, and that I am in reality internally consistent… have I understood you correctly? If so, can you say more about why you believe those things?
An agent should pick the best options they can get themselves to pick. In practice these will not be the ones that maximize utility as they understand it, but they will be ones with higher utility than if they just did whatever they felt like. And, more strongly, this gives higher utility than if they tried to do as many good things as possible without prioritizing the really important ones.
Such a moral theory can be used as one of the criteria in a multi-criterion decision system. This is useful because in general people prefer being more moral to being less moral, but not to the exclusion of everything else. For example, one might genuinely want to improve the world and yet be unwilling to make life-altering changes (like donating all but the bare minimum to charity) to further this goal.
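As an illustration (all weights and scores below are invented), the moral ranking can enter as one weighted criterion among several in a simple weighted-sum decision rule; this is one common way, not the only way, to combine criteria:

    # "moral" is the moral theory's score; the other criteria reflect the
    # fact that people care about things besides being maximally moral.
    # All numbers are hypothetical.
    options = {
        "donate all but the bare minimum": {"moral": 100, "comfort": 5,  "sustainability": 20},
        "donate 10% and keep your job":    {"moral": 70,  "comfort": 80, "sustainability": 90},
        "do nothing":                      {"moral": 0,   "comfort": 90, "sustainability": 100},
    }

    weights = {"moral": 0.5, "comfort": 0.3, "sustainability": 0.2}

    def overall(scores):
        return sum(weights[c] * scores[c] for c in weights)

    best = max(options, key=lambda name: overall(options[name]))
    print(best)  # "donate 10% and keep your job" under these invented weights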
Huh? So your view of a moral theory is that it ranks your options, but there’s no implication that a moral agent should pick the best known option?
What purpose does such a theory serve? Why would you classify it as a “moral theory” rather than “an interesting numeric exercise”?