There is the oft-repeated anecdote of the utilitarian moral philosopher weighing up whether to accept a job at Columbia. He would get more money, but it would uproot his family; then again, it might help his career… a familiar kind of moral dilemma. When he asked a colleague for advice, he was told “Just maximise total utility.” “Come on,” he is supposed to have replied, “this is serious!”
I struggle to think of any moral dilemma I have faced where utilitarian ethics even provide a practical framework for addressing the problem, let alone a potential answer.
Sauce: http://lesswrong.com/lw/890/rationality_quotes_november_2011/5aq7
That anecdote is about a decision theorist, not a moral philosopher. The dilemma you describe is a decision-theoretic one, not a moral utilitarian one.
Writing out costs and benefits is a technique that is sometimes helpful.
Sure, but “costs” and “benefits” are themselves value-laden terms, which depend on the ethical framework you are using. And then comparing the costs and the benefits is itself value-laden.
In other words, people using non-utilitarian ethics can get plenty of value out of writing down costs and benefits. And people using utilitarian ethics don’t necessarily get much value (it doesn’t really help the philosopher in the anecdote). This is therefore not an example of how utilitarian ethics are useful.
Writing down costs and benefits is clearly an application of consequentialist ethics, unless things are so muddied that any action might be an example of any ethic. Consequentialist ethics need not be utilitarian, true, but they are usually pretty close to utilitarian. Certainly closer to utilitarianism than to virtue ethics.
No, because “costs” and “benefits” are value-laden terms.
Suppose I am facing a standard moral dilemma: should I give my brother proper funerary rites, even though the city’s ruler has forbidden it? So I take your advice and write down costs and benefits. Costs: breaching my duty to obey the law, punishment for me, possible reigniting of the city’s civil war. Benefits: upholding my duty to my family, proper funeral rites for my brother, restored honour. By writing this down I haven’t committed to any ethical system; all I’ve done is clarify what’s at stake. For example, if I’m a deontologist, perhaps this helps clarify that it comes down to duty to the law versus duty to my family. If I’m a virtue ethicist, perhaps this shows it’s about whether I want to be the kind of person who is loyal to their family above tawdry concerns of politics, or the kind of person who is willing to put their city above petty personal concerns. This even works if I’m just an egoist with no ethics: is the suffering of being imprisoned in a cave greater or less than the suffering I’ll experience knowing my brother’s corpse is being eaten by crows?
Ironically, the only person this doesn’t help is the utilitarian, because he has absolutely no way of comparing the costs and the benefits—“maximise utility” is a slogan, not a procedure.
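To make the “slogan, not a procedure” complaint concrete: once numeric utilities and probabilities have somehow been assigned, maximising them is mechanically trivial. Here is a minimal sketch, using the funeral-rites dilemma above; every number in it is invented purely for illustration, and producing defensible numbers is exactly the step the slogan leaves unspecified.

```python
# Toy expected-utility comparison for the funeral-rites dilemma.
# All utilities and probabilities below are made up for illustration;
# the arithmetic is the easy part, the numbers are the contested part.

actions = {
    "bury my brother": [
        (0.9, -50.0),   # (probability, utility): I am punished by the ruler
        (0.1, -200.0),  # the city's civil war reignites
    ],
    "obey the edict": [
        (1.0, -80.0),   # brother unburied, family duty and honour breached
    ],
}

def expected_utility(outcomes):
    """Sum of probability-weighted utilities for one action."""
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(f"{action}: expected utility {expected_utility(outcomes):.1f}")

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("utility-maximising action:", best)
```

The calculation only becomes a procedure after the value-laden step of turning “punishment for me” and “my brother’s corpse eaten by crows” into comparable numbers, which is the point of the objection above.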
What are you arguing here? First, you argue that “just maximise utility” is not enough to make a decision. This is of course true, since utilitarianism is not a fully specified theory. There are many different utilitarian systems of ethics, just as there are many different deontological ethics and many different egoist ethics.
Second, you are arguing that working out the costs and benefits is not an indicator of consequentialism. Perhaps the link is not perfect, but if you follow these arguments to their conclusion then basically nothing is an indicator of any ethical system. Writing a list of costs and benefits, as these terms are usually understood, focuses one’s attention on the consequences of the action rather than the reasons for the action (as the virtue ethicists care about) or the rules mandating or forbidding an action (as the deontologists care about). Yes, the users of different ethical theories can use pretty much any tool to help them decide, but some tools are more useful for some theories because they push your thinking in the directions that theory considers relevant.
Are you arguing anything else?