(If someone believes that there is a way these interpersonally comparable utilities could actually be grounded in physical reality, I’d be extremely curious to hear it.)
I asked about this before in the context of one of Julia Galef’s posts about utilitarian puzzles and received several responses. What is your evaluation of the responses (personally, I was very underwhelmed)?
The only reasonable attempt at a response in that sub-thread is this comment. I don’t think the argument works, though. The problem is not just disagreement between different people’s intuitions, but also the fact that humans don’t do anything like utility comparisons when it comes to decisions that affect other people. What people do in reality is intuitive folk ethics, which is basically virtue ethics, and has very little concern with utility comparisons.
That said, there are indeed some intuitions about utility comparison, but they are far too weak, underspecified, and inconsistent to serve as a basis for extracting an interpersonal utility function, even if we ignore disagreements between people.
Intuitive utilitarian ethics are very helpful in everyday life.
There is the oft-repeated anecdote of the utilitarian moral philosopher weighing up whether to accept a job at Columbia. It would mean more money, but it would uproot his family, though it might help his career… a familiar kind of moral dilemma. Asking his colleague for advice, he was told “Just maximise total utility.” “Come on,” he is supposed to have replied, “this is serious!”
I struggle to think of any moral dilemma I have faced where utilitarian ethics even provide a practical framework for addressing the problem, let alone a potential answer.
Sauce: http://lesswrong.com/lw/890/rationality_quotes_november_2011/5aq7
That anecdote is about a decision theorist, not a moral philosopher. The dilemma you describe is a decision theoretic one, not a moral utilitarian one.
Writing out costs and benefits is a technique that is sometimes helpful.
Sure, but “costs” and “benefits” are themselves value-laden terms, which depend on the ethical framework you are using. And then comparing the costs and the benefits is itself value-laden.
In other words, people using non-utilitarian ethics can get plenty of value out of writing down costs and benefits. And people using utilitarian ethics don’t necessarily get much value (doesn’t really help the philosopher in the anecdote). This is therefore not an example of how utilitarian ethics are useful.
Writing down costs and benefits is clearly an application of consequentialist ethics, unless things are so muddied that any action might be an example of any ethic. Consequentialist ethics need not be utilitarian, true, but they are usually pretty close to utilitarian. Certainly closer to utilitarianism than to virtue ethics.
No, because “costs” and “benefits” are value-laden terms.
Suppose I am facing a standard moral dilemma: should I give my brother proper funerary rites, even though the city’s ruler has forbidden it? So I take your advice and write down costs and benefits. Costs—breaching my duty to obey the law, punishment for me, possible reigniting of the city’s civil war. Benefits—upholding my duty to my family, proper funeral rites for my brother, restored honour. By writing this down I haven’t committed to any ethical system; all I’ve done is clarify what’s at stake. For example, if I’m a deontologist, perhaps this helps clarify that it comes down to duty to the law versus duty to my family. If I’m a virtue ethicist, perhaps this shows it’s about whether I want to be the kind of person who is loyal to their family above tawdry concerns of politics, or the kind of person who is willing to put their city above petty personal concerns. This even works if I’m just an egoist with no ethics: is the suffering of being imprisoned in a cave greater or less than the suffering I’ll experience knowing my brother’s corpse is being eaten by crows?
Ironically, the only person this doesn’t help is the utilitarian, because he has absolutely no way of comparing the costs and the benefits—“maximise utility” is a slogan, not a procedure.
What are you arguing here? First you argue that “just maximize utility” is not enough to make a decision. This is of course true, since utilitarianism is not a fully specified theory. There are many different utilitarian systems of ethics, just as there are many different deontological ethics and many different egoist ethics.
Second, you are arguing that working out the costs and benefits is not an indicator of consequentialism. Perhaps this is not perfectly true, but if you follow these arguments to their conclusion then basically nothing is an indicator of any ethical system. Writing a list of costs and benefits, as these terms are usually understood, focuses one’s attention on the consequences of the action rather than the reasons for the action (as the virtue ethicists care about) or the rules mandating or forbidding an action (as the deontologists care about). Yes, the users of different ethical theories can use pretty much any tool to help them decide, but some tools are more useful for some theories because they push your thinking in the directions that theory considers relevant.
Are you arguing anything else?
Could you provide some concrete examples?
I am thinking of petty personal disputes, say when one person finds something another person does annoying. A common gut reaction is to immediately start staking out territory about what is just and what is virtuous and so on, while the correct thing to do is to focus on the concrete benefits and costs of actions. The main reason this is better is not that it maximizes utility but that it minimizes argumentativeness.
Another good example is competition for a resource. Sometimes one feels that one deserves a fair share and that this is very important, but if you have no special need for the resource, and there are no significant diminishing marginal returns, then it’s really not that big a deal.
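To put a rough number on the diminishing-returns caveat, here is a toy sketch; the log utility function and the specific numbers are purely my own illustration, not anything from the thread.

```python
import math

# A toy illustration (mine, not from the discussion): how much the split of a
# fixed resource matters for total utility depends on whether returns diminish.

RESOURCE = 100.0

def linear(x):
    return x                     # no diminishing returns

def concave(x):
    return math.log(1.0 + x)     # strongly diminishing returns

def total_utility(share_for_a, utility):
    """Sum of two people's utilities when person A gets share_for_a of the resource."""
    return utility(share_for_a) + utility(RESOURCE - share_for_a)

for share in (50.0, 90.0):
    print(f"A gets {share:5.1f}: "
          f"linear total = {total_utility(share, linear):6.1f}, "
          f"log total = {total_utility(share, concave):5.2f}")

# With linear utility the total is 100 however the resource is split, so insisting
# on the "fair share" buys nothing in total-utility terms; with log utility the
# 50/50 split does noticeably better, and the split genuinely matters.
```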
In general, intuitive deontological tendencies can be jerks sometimes, and utilitarianism fights that.
http://lesswrong.com/lw/b4f/sotw_check_consequentialism/
Thanks for the link, I am very underwhelmed too.
If I understand it correctly, one suggestion is equivalent to choosing some X, and re-scaling everyone’s utility function so that X has value 1. The obvious problems are the arbitrary choice of X, and the fact that on some people’s original scale X may have positive, negative, or zero value.
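Here is a minimal sketch of how I am reading that first suggestion; the rescale function and the toy utility numbers are mine and purely illustrative, not taken from the linked comment.

```python
# Pick a reference outcome X and divide each person's utilities by their own
# value for X, so that X ends up worth exactly 1 on everyone's rescaled scale.

def rescale(utilities, reference):
    """Rescale one person's utilities so the reference outcome has value 1."""
    anchor = utilities[reference]
    if anchor == 0:
        raise ValueError("reference outcome has zero value for this person; cannot rescale")
    return {outcome: u / anchor for outcome, u in utilities.items()}

alice = {"X": 2.0, "picnic": 6.0, "dentist": -4.0}
bob   = {"X": -1.0, "picnic": 3.0, "dentist": 1.0}   # X is a *bad* outcome for Bob

print(rescale(alice, "X"))   # {'X': 1.0, 'picnic': 3.0, 'dentist': -2.0}
print(rescale(bob, "X"))     # dividing by a negative anchor flips Bob's ranking:
                             # {'X': 1.0, 'picnic': -3.0, 'dentist': -1.0}
```

Dividing by a negative anchor inverts that person’s whole ranking, and a zero anchor makes the rescaling undefined, which is exactly the sign problem mentioned above.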
The other suggestion is equivalent to choosing a hypothetical person P with infinite empathy towards all people, and using the utility function of P as the absolute utility. I am not sure about this, but it seems to me that the result depends on P’s own preferences, and this cannot be fixed, because without preferences there could be no empathy.
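To make that worry a bit more concrete (this formalization is my own reading, not something stated in the thread): suppose P’s empathic utility for an outcome o aggregates how P imagines each person i feeling, something like

$$U_P(o) \;=\; \sum_i w^P_i \, f^P_i\!\left(u_i(o)\right),$$

where the weights $w^P_i$ say how much P cares about person i and each $f^P_i$ translates person i’s own utility scale into P’s felt experience. Both are facts about P, so two judges with “infinite empathy” but different $w^P_i$ or $f^P_i$ would produce different “absolute” utilities, which just reproduces the original comparison problem one level up.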