Does utilitarianism “require” extreme self-sacrifice? If not, why do people commonly say it does?
Chris Hallquist wrote the following in an article (if you know the article, please, please don’t bring it up; I don’t want to discuss the article in general):
“For example, utilitarianism apparently endorses killing a single innocent person and harvesting their organs if it will save five other people. **It also appears to imply that donating all your money to charity beyond what you need to survive isn’t just admirable but morally obligatory.**”
The non-bold part is not what is confusing me. But where does the “obligatory” part come in? I don’t really see how it’s obvious what, if any, ethical obligations utilitarianism implies. Given a set of basic assumptions, utilitarianism lets you argue whether one action is more moral than another. But I don’t see how it’s obvious which, if any, moral benchmarks utilitarianism sets for “obligatory.” I can see how certain frameworks on top of utilitarianism imply certain moral requirements. But I do not see how the bolded quote is a criticism of the basic theory of utilitarianism.
However, this criticism comes up all the time. Honestly, the best explanation I could come up with was that people were being unfair to utilitarianism and not thinking through their statements. But the above quote is by HallQ, who is intelligent and thoughtful. So now I am genuinely very curious.
Do you think utilitarianism really requires such extreme self-sacrifice, and if so, why? And if it does not require this, why do so many people say it does? I am very confused and would appreciate help working this out.
edit:
I am having trouble asking this question clearly, since utilitarianism is probably best thought of as a cluster of beliefs, so it’s not clear what asking “does utilitarianism imply X” actually means. Still, I made this post since I am confused. Many thoughtful people identify as utilitarian (for example Ozy and theunitofcaring) yet do not think people have extreme obligations. However, I can think of examples where people do not seem to understand the implications of their ethical frameworks. For example, many Jewish people endorse the message of the following story:
Rabbi Hillel was asked to explain the Torah while standing on one foot and responded “What is hateful to you, do not do to your neighbor. That is the whole Torah; the rest is the explanation of this—go and study it!”
The story is presumably apocryphal, but it is repeated all the time by Jewish people. However, it’s hard to see how the story makes even a semblance of sense. The Torah includes huge amounts of material that violates the “Golden Rule” very badly. So people who think this story gives even a moderately accurate picture of the Torah’s message are mistaken, imo.
My view, and a lot of other people here seem to also be getting at this, is that the demandingness objection comes from a misuse of utilitarianism. People want their morality to label things ‘permissible’ and ‘impermissible’, and utilitarianism doesn’t natively do that. That is, we want boolean-valued morality. The trouble is, Bentham went and gave us a real-valued one. The most common way to get a bool out of that is to label the maximum ‘true’ and everything else false, but that doesn’t give a realistically human-followable result. Some philosophers have worked on ‘satisficing consequentialism’, which is a project to design a better real-to-bool conversion, but I think the correct answer is to learn to use real-valued morality.
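To make the “real-to-bool conversion” concrete, here is a minimal sketch (the acts, numbers, and threshold are my own illustration, not anything from Bentham or the satisficing literature): the maximizing rule permits only the single top-ranked option, while a satisficing rule permits anything above some chosen threshold.

```python
def maximizing_permissible(utilities):
    """'Label the maximum true and everything else false':
    only the single best option counts as permissible."""
    best = max(utilities.values())
    return {act: u == best for act, u in utilities.items()}

def satisficing_permissible(utilities, threshold):
    """A satisficing-style conversion: anything 'good enough'
    (at or above the threshold) counts as permissible."""
    return {act: u >= threshold for act, u in utilities.items()}

acts = {"donate everything": 100, "donate 10%": 80, "do nothing": 0}

print(maximizing_permissible(acts))
# {'donate everything': True, 'donate 10%': False, 'do nothing': False}
print(satisficing_permissible(acts, threshold=50))
# {'donate everything': True, 'donate 10%': True, 'do nothing': False}
```

The maximizing conversion is the one that generates the demandingness worry; the dispute over satisficing is largely about where (and how non-arbitrarily) to put the threshold.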
There’s some oversimplification above (I suspect people have always understood non-boolean morality in some cases), but I think it captures the essential problem.
A useful word here is “supererogation”, but this still implies that there’s a baseline level of duty, which itself implies that it’s possible even in principle to calculate a baseline level of duty.
There may be cultural reasons for the absence of the concept: some Catholics have said that Protestantism did away with supererogation entirely. My impression is that that’s a one-line summary of something much more complex (though possibly with potential toward the realization of the one-line summary), but I don’t know much about it.
Supererogation was part of the moral framework that justified indulgences. The idea was that the saints and the church did lots of stuff that was above and beyond the necessary amounts of good (and God presumably has infinitely deep pockets if you’re allowed to tap Him for extra), and so they had “credit left over” that could be exchanged for money from rich sinners.
The Protestants generally seem to have considered indulgences to be part of a repugnant market and in some cases made explicit that the related concept of supererogation itself was a problem.
In Mary at the Foot of the Cross 8: Coredemption as Key to a Correct Understanding of Redemption, on page 389, there is a quick summary of a Lutheran position, for example:
The setting of the “zero point” might in some sense be arbitrary… a matter of mere framing. You could frame it as people already all being great, but with the option to be better. You could frame it as having some natural zero around the point of not actively hurting people and any minor charity counting as a bonus. In theory you could frame it as everyone being terrible monsters with a minor ability to make up a tiny part of their inevitable moral debt. If it is really “just framing” then presumably we could fall back to sociological/psychological empiricism, and see which framing leads to the best outcomes for individuals and society.
On the other hand, the location of the zero level can be absolutely critical if we’re trying to integrate over a function from now to infinity and maximize the area under the curve. SisterY’s essay on suicide and “truncated utility functions” relies on “being dead” having precisely zero value for an individual, and some ways of being alive having a negative value… in these cases the model suggests that suicide and/or risk taking can make a weird kind of selfish sense.
If you loop back around to the indulgence angle, one reading might be that if someone sins then they are no longer perfectly right with their local community. In theory, they could submit to a little extra hazing to prove that they care about the community despite transgressing its norms. In this case, the natural zero point might be “the point at which they are on the edge of being ostracized”. If you push on that, the next place to look for justifications would focus on how ostracism and unpersoning works, and perhaps how it should work to optimize for whatever goals the community nominally or actually exists to achieve.
I have my own pet theories about how to find “natural zeros” in value systems, but this comment is already rather long :-P
I think my favorite insight from the concept of supererogation is the idea that carbon offsets are in some sense “environmental indulgences”, which I find hilarious :-)
Please, do tell, that sounds very interesting.
It seems to me that systems that put “zero point” very high rely a lot on something like extrinsic motivation, whereas systems that put “zero point” very low rely mostly on intrinsic motivation.
In addition to that, if you have 1000 euros and desperately need to have 2000, and you play a game where you have to bet on the result of a coin toss, then you maximize your probability of ever reaching that sum by going all in. Whereas if you have 1000 and need to stay above 500, then you place your bets as conservatively as possible. Perhaps putting zero very high encourages “all in” moral gambles, encouraging unusual acts that might have high variance of moral value (if they succeed in achieving high moral value, they are called heroic acts)? Perhaps putting zero very low encourages playing conservatively, doing a lot of small acts instead of one big heroic act.
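A quick simulation makes the bet-sizing point concrete. One hedge: with a perfectly fair coin every strategy gives the same chance of hitting the target, so this sketch (the stakes and the 0.48 win probability are my own choices) uses a slightly unfavourable coin, where bold play really does beat timid play at reaching the goal.

```python
import random

def play(start, target, win_prob, bet_fn, max_rounds=100_000):
    """Simulate one gambler until they reach the target, go broke, or time out."""
    wealth = start
    for _ in range(max_rounds):
        if wealth >= target:
            return True
        if wealth <= 0:
            return False
        bet = min(bet_fn(wealth, target), wealth)
        wealth += bet if random.random() < win_prob else -bet
    return False

def bold(wealth, target):
    # Stake everything needed to hit the target in one go.
    return target - wealth

def timid(wealth, target):
    # Grind it out with small fixed stakes.
    return 10

def estimate(bet_fn, trials=2_000):
    return sum(play(1000, 2000, 0.48, bet_fn) for _ in range(trials)) / trials

print("bold :", estimate(bold))    # roughly 0.48
print("timid:", estimate(timid))   # very close to 0
```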
The word may have fallen out of favor, but I think the concept of “good, but not required” is alive and well in almost all folk morality. It’s troublesome for (non-divine-command) philosophical approaches because you have to justify the line between ‘obligation’ and ‘supererogation’ somehow. I suspect the concept might sort of map onto a contractarian approach by defining ‘obligatory’ as ‘society should sanction you for not doing it’ and ‘supererogatory’ as ‘good but not obligatory’, though that raises as many questions as it answers.
Huh? So your view of a moral theory is that it ranks your options, but there’s no implication that a moral agent should pick the best known option?
What purpose does such a theory serve? Why would you classify it as a “moral theory” rather than “an interesting numeric exercise”?
There’s a sort of Tortoise-Achilles type problem in interpreting the word ‘should’ where you have to somehow get from “I should do X” to doing X; that is, in converting the outputs of the moral theory into actions (or influence on actions). We’re used to doing this with boolean-valued morality like deontology, so the problem isn’t intuitively problematic.
Asking utilitarianism to answer “Should I do X?” is an attempt to reuse our accustomed solution to the above problem. The trouble is that by doing so you’re lossily turning utilitarianism’s outputs into booleans, and every attempt to do this runs into problems (usually demandingness). The real answer is to solve the analogous problem with numbers instead of booleans, to somehow convert “Utility of X is 100; Utility of Y is 80; Utility of Z is −9999″ into being influenced towards X rather than Y and definitely not doing Z.
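One concrete way to cash out “being influenced towards X” without declaring everything below the maximum forbidden is to choose stochastically, with probabilities that increase in utility. The softmax/temperature machinery below is purely my illustration, not something the comment proposes:

```python
import math
import random

def soft_choice(utilities, temperature=20.0):
    """Higher-utility options get chosen more often, but nothing short
    of the maximum is labelled 'forbidden'."""
    weights = [math.exp(u / temperature) for u in utilities.values()]
    return random.choices(list(utilities), weights=weights)[0]

options = {"X": 100, "Y": 80, "Z": -9999}
picks = [soft_choice(options) for _ in range(10_000)]
# X wins most of the time, Y sometimes, Z essentially never.
print({name: picks.count(name) for name in options})
```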
The purpose of the theory is that it ranks your options, and you’re more likely to do higher-ranked options than you otherwise would be. It’s classified as a moral theory because it causes you to help others and promote the overall good more than self-interest would otherwise lead you to. It just doesn’t do so in a way that’s easily explained in the wrong language.
Isn’t a “boolean” right/wrong answer exactly what utilitarianism promises in the marketing literature? Or, more precisely, doesn’t it promise to select for us the right choice among a collection of alternatives? If the best outcomes can be ranked—by global goodness, or whatever standard—then logically there is a winner or set of winners which one may, without guilt, indifferently choose from.
From a utilitarian perspective, you can break an ethical decision problem down into two parts: deciding which outcomes are how good, and deciding how good you’re going to be. A utility function answers the first part. If you’re a committed maximizer, you have your answer to the second part. Most of us aren’t, so we have a tough decision there that the utility function doesn’t answer.
Well, for one thing, if I’m unwilling to sign up for more than N personal inconvenience in exchange for improving the world, such a theory lets me take the set of interventions that cost me N or less inconvenience and rank them by how much they improve the world, and pick the best one. (Or, in practice, to approximate that as well as I can.) Without such a theory, I can’t do that. That sure does sound like the sort of work I’d want a moral theory to do.
Okay, but it sounds like either the theory is quite incomplete, or your limit of N is counter to your moral beliefs. What do you use to decide that world utility would not be improved by N+1 personal inconvenience, or to decide that you don’t care about the world as much as yourself?
I don’t need a theory to decide I’m unwilling to sign up for more than N personal inconvenience; I can observe it as an experimental result.
Yes, both of those seem fairly likely.
It sounds like you’re suggesting that only a complete moral theory serves any purpose, and that I am in reality internally consistent… have I understood you correctly? If so, can you say more about why you believe those things?
An agent should pick the best options they can get themselves to pick. In practice these will not be the ones that maximize utility as they understand it, but they will be ones with higher utility than if they just did whatever they felt like. And, more strongly, this gives higher utility than if they tried to do as many good things as possible without prioritizing the really important ones.
Such a moral theory can be used as one of the criteria in a multi-criterion decision system. This is useful because in general people prefer being more moral to being less moral, but not to the exclusion of everything else. For example, one might genuinely want to improve the world and yet be unwilling to make life-altering changes (like donating all but the bare minimum to charity) to further this goal.
You have to get decisions out of the moral theory. A decision is a choice of a single thing to do out of all the possibilities for action. For any theory that rates possible actions by a real-valued measure, maximising that measure is the result the theory prescribes.
If that does not give a realistically human-followable result, then either you give up the idea of measuring decisions by utility, or you take account of people’s limitations in defining the utility function. However, if you believe your utility function should be a collective measure of the well-being of all sentient individuals (that is, if you not merely have a utility function, but are a utilitarian), of which there are at least 7 billion, you would have to rate your personal quality of life vastly higher than anyone else’s to make a dent in the rigours to which it calls you.
I’m not sure you can really say it’s a ‘misuse’ if it’s how Bentham used it. He is essentially the founder of modern utilitarianism. If any use is a misuse, it is scalar utilitarianism. (I do not think that is a misuse either).
Fair point… I think the way I see it is that Bentham discovered the core concept of utilitarianism and didn’t build quite the right structure around it. My intention is to make ethical/metaethical claims, not historical/semantic ones… does that make sense?
(It’s true I haven’t offered a detailed counterargument to anyone who actually supports the maximizing version; I’m assuming in this discussion that its demandingness disqualifies it)
It might be useful to distinguish between a “moral theory”, which can be used to compare the morality of different actions, and a “moral standard”, which is a boolean rule used to determine what is morally ‘permissible’ and what is morally ‘impermissible’.
I think part of the point your post makes is that people really want a moral standard, not a moral theory. I think that makes sense; with a moral standard, you have a course of action guaranteed to be “good”, whereas a moral theory makes no such guarantee.
Furthermore, I suspect that the commonly accepted societal standard is “you should be as moral as possible”, which means that a moral theory is translated into a moral standard by treating the most moral option as “permissible” and everything else as “impermissible”. This is exactly what occurs in the text quoted by OP; it takes the utilitarian moral theory and projects it onto a standard according to which only the most moral option is permissible, making it obligatory.
It basically depends whether you’re a maximising utilitarian or a scalar utilitarian. The former says that you should do the best thing. The latter is less harsh in that it just says that better actions are better without saying that you necessarily have to do the best one.
Thanks for the link. I like your terminology better than mine. :)
The main difference with a utility-function-based approach is that there is no concept of “sufficient effort”. Every action gets an (expected) utility attached to it. Sending £10 to an efficient charity is X utilons above not doing so; but selling everything you own to donate to the charity is (normally) even higher.
So I think the criticism is accurate, in that humans almost never achieve perfection in following utility; there’s always room for more effort, and there’s no distinction between actions that are “allowed” versus “required” (as other ethical systems sometimes have). So for certain types of mind (perfectionists or those with a high desire for closure), a utility-function-based morality demands ever more: they can never satisfy the requirements of their morality. Those who are content with “doing much better” rather than “doing the absolute best” won’t find it so crippling.
Or, put more simply, for a utility-function-based approach, no one is going to (figuratively) hand you a medal and say “well done, you’ve done enough”. Some people see this as being equivalent to “you’re obliged to do the maximum.”
I thought about this question a while ago and have been meaning to write about it sometime. This is a good opportunity.
Terminology: Other commenters are pointing out that there are differing definitions of the word “utilitarianism”. I think it is clear that the article in question is talking about utilitarianism as an ethical theory (or rather, a family of ethical theories). As such, utilitarianism is a form of consequentialism, the view that doing “the right thing” is what produces the best state of affairs. Utilitarianism is different from other forms of consequentialism in that the thing people consider good/valuable/worth achieving is directly tied to conscious beings. An example of a non-utilitarian consequentialist theory would be the belief that knowledge is the most important thing, and that we should all strive to advance science (at all costs).
In regard to the question, there are two interesting points that are immediately worth pointing out:
1) Utilitarianism (and any sort of consequentialism), if it is indeed demanding, is only demanding in certain empirical situations. If the world is already perfect, you don’t have to do anything!
2) For every consequentialist view, there are empirical situations where achieving the best consequences is extremely demanding. Just imagine that the desired state of affairs is really hard to attain.
So my first reply to people who criticise utilitarianism for being too demanding is the following: Yes, it’s very unfortunate that the world is so messed up, but it’s not the fault of the utilitarians!
Further, the quoted statement in bold speaks of certain actions being “not just admirable, but morally obligatory”. I find this framing misleading. I believe that people should taboo words like “morally obligatory” in ethical discussions. It makes it seem like there is some external moral standard that humans are supposed to obey, but what would it be, and more importantly, why should we care? In my disclaimer on terminology, I wrote that I’m referring to utilitarianism as an ethical theory. I don’t intend this to mean that utilitarians are committed to the claim that there are universally valid “ethical truths”. I would define “utilitarian” as: “Someone who would voluntarily take a pill that turns them into a robot that goes on to perfectly maximize expected utility”. With “utility” being defined as “world-states that are good for sentient individuals”, with “good” being defined in non-moral terms, depending on which branch of utilitarianism one subscribes to (could be that e.g. preference-fulfillment is important to you, or contentment, or sum of happiness minus suffering). According to this interpretation, a utilitarian would not be committed to the view that non-utilitarian people are “making a mistake”—perhaps they just care about different things!
According to the meta ethical view I just sketched, which is meta ethical anti-realism, the demandingness of utilitarianism loses its scariness. If something is requested of you against your will, you’re going to object all the more if the request is more demanding. However, if you have a particular goal in life and find out that the circumstances are unfortunately quite dire, so achieving your goal will be very hard, your objection will be directed towards the state of the world, not towards your own goal (hopefully anyway, sometimes people irrationally do the other thing).
Yes, utilitarianism ranks actions according to how much expected utility they produce, and only one action will be “best”. However, it would be very misleading to apply moral terms like “only the best action is right, all the others are wrong”. Unlike deontology, where all you need to do is to not violate a set of rules, utilitarianism should be thought of as an open-ended game where you can score points, and all you try to do is score the most points. Yes, there is just one best path of action, but it can still make a huge difference whether you e.g. take the fifteenth best action or the nineteenth. For utilitarians, moral praise is merely instrumental: They want to blame and praise people in a way that produces the best outcome. This includes praising people for things that are less than perfect, for instance.
So in part, the demandingness objection against utilitarianism relies on an uncharitable interpretation/definition of “utilitarianism”, which commits utilitarians to belief in moral realism. (I consider this interpretation uncharitable because I think the entire concept of “moral realism” is, like libertarian free will, a confused idea that cannot be defined in clear terms without losing at least part of the connotations we intuitively considered important.)
Another reason why I think the demandingness objection is a bad objection is because people usually apply it in a naive, short-sighted way. The author of the quote in question did so, for instance: “It also appears to imply that donating all your money to charity beyond what you need to survive (…)” This is wrong. It only implies donating all your money to charity beyond what you need to be maximally productive in the long run. Empirical studies show that being poor decreases the quality of your decision-making. Further, putting too much pressure on yourself often leads to burnout, which leads to a significant loss of productivity in the long run. I find that people tend to overestimate how demanding a typical utilitarian life is. But they are right insofar as there could be situations where trying to achieve the utilitarian goal results in significant self-sacrifice. Such situations are definitely logically possible, but I think they are much more rare than people think.
The reason this is the case is because people tend to conflate “trying to act like a perfectly rational, super-productive utilitarian robot would act” and “trying to maximise expected utility given all your personal constraints”. Utilitarianism implies the latter, not the former. Utilitarianism refers to desiring a specific overall outcome, not to a specific decision-procedure for every action you are taking. It is perfectly in line with utilitarianism to come to a conclusion such as: “My personality happens to be such that thinking about all the suffering in the world every day is just too much for me, I literally couldn’t keep it up for more than two months. I want to make a budget for charity once every year, I donate what’s in that budget, and for the rest of the time, I try to not worry much about it.” If it is indeed the case that doing things differently will lead to this person giving up the entire endeavour of donating money, then this is literally the best thing to do for this person. Humans need some degree of happiness and luxury if they want to remain productive and clear-headed in the long run.
The whole thing is also extremely person-dependent. For some people, “trying to maximise expected utility given all your personal constraints” will look more like “trying to act like a perfectly rational, super-productive utilitarian robot would act” than for other people. Some people are just naturally better at achieving a goal than other people, this depends on both the goals and on the personality traits and assets of the person in question.
Finally, let’s ask whether “trying to maximise expected utility given all your personal constraints” will, on average, given real-world circumstances, prove to be demanding or not. I suggest defining “demanding” as follows: goal A is more demanding than goal B if people who try to rationally achieve A have a lower average happiness across a time period than people who try to rationally achieve goal B. If you were to empirically measure this, I would suggest contacting people at random times during the day or night to ask them to report how they are feeling at this very moment. When it comes to momentary happiness, it is trivial that trying to maximise your momentary happiness will lead to you being happier than trying to be utilitarian. Utilitarians might object, citing the paradox of hedonism: When people only focus on their own personal happiness, their life will soon feel sad. However, this would be making the exact same mistake I discussed earlier. If it is truly the case that explicitly focusing on your personal happiness makes you miserable, then of course the rational thing to do for a person with this goal would be to self-modify and convince yourself to follow a different goal.
There is a distinction between the experiencing self and the remembering self, which is why it would be a completely different question to ask people “how happy are you with your life on the whole”. For instance, I read somewhere that mothers (compared to women without children) tend to be less happy in the average moment, but more happy with their life as a whole. What is it that you care about more? I would assume that people are happy with their life on the whole if they know what they want in life, if they think they made good choices regarding the goals that they have, and if they got closer to their goals more and more. At least for the first part of this, knowing what you want in life, utilitarianism does very well.
I’m seeing fundamental disagreement on what “moral” means.
In the Anglo Saxon tradition, what is moral is what you should or ought to do, where should and ought both entail a debt one has the obligation to pay. Note that this doesn’t make morality binary; actions are more or less moral depending on how much of the debt you’re paying off. I wouldn’t be surprised if this varied a lot by culture, and I invite people to detail the similarities and differences in other cultures they are familiar with.
What I hear from some people here is Utilitarianism as a preference for certain states of the world, where there is no obligation to do anything—action to bring about those states is optional.
I think in the Anglo Saxon tradition, actions which fulfill preferences but are not obligatory would be considered praiseworthy or benevolent. Perhaps people would call them moral in terms of more than paying off your debt, but failing to “pay extra” would not be considered immoral.
Let’s call people who view morality as what is obligatory “Moralos”, and people who view morality as what is preferable “Moralps”.
Moralos will view Moralps as unjustly demanding and completely hypocritical—demanding payments on a huge debt, but only making tiny payments, if any, toward those debts themselves. Moralps will view Moralos as pretty much hateful—they don’t even prefer a better world, they want it to be worse.
This looks very familiar to me.
Haidt should really add questions to his poll to get at just what morality means to people, in particular in terms of obligation.
This makes sense… and the idea of ‘praiseworthy/benevolent’ shows that Moralos do have the concept of a full ranking.
So we could look at this as Moralos having a ranking plus an ‘obligation rule’ that tells you how good an outcome you’re obligated to achieve in a given situation, while Moralps don’t accept such a rule and instead just play it by ear.
Justifying an obligation rule seems philosophically tough… unless you justify it as a heuristic, in which case you get to think like a Moralp and act like a Moralo, and abandon your heuristic if it seems like it’s breaking down. Taking Giving What We Can’s 10% pledge is a good example of adopting such a heuristic.
Maybe, but it’s a very common moral intuition, so anything that purports to be a theory of human morality ought to explain it, or at least explain why we would misperceive that the distinction between obligatory and praiseworthy-but-non-obligatory actions exists.
Is heuristic value not a sufficient explanation of the intuition?
I don’t see the heuristic value. We don’t perceive people as being binarily e.g. either attractive or unattractive, friendly or unfriendly, reliable or unreliable; even though we often had to make snap judgements about these attributes, on matters of life and death, we still perceive them as being on a sliding scale. Why would moral vs. immoral be different?
It’d be fairer to compare to other properties of actions rather than properties of people; I think moral vs. immoral is also a sliding scale when applied to people.
That said, we do seem more attached to the binary of moral vs. immoral actions than, say, wise vs. unwise. My first guess is that this stems from a desire to orchestrate social responses to immoral action. From this hypothesis I predict that binary views of moral/immoral will be correlated with coordinated social responses to same.
Interesting; that may be a real difference in our intuitions. My sense is that unless I’m deliberately paying attention I tend to think of people quite binarily as either decent people or bad people.
Significantly more than you think of them binarily regarding those other categories? Then it is a real difference.
My view of people is that there are a few saints and a few cancers, and a big decent majority in between who sometimes fall short of obligations and sometimes exceed them depending on the situation. The ‘saint’ and ‘cancer’ categories are very small.
What do your ‘good’ and ‘bad’ categories look like, and what are their relative sizes?
I think of a large population of “decent”, who generically never do anything outright bad (I realise this is probably inaccurate, I’m talking about intuitions). There’s some variation within that category in terms of how much outright good they do, but that’s a lot less important. And then a smaller but substantial chunk, say 10%, of “bad” people, people who do outright bad things on occasion (and some variation in how frequently they do them, but again that’s much less important).
There could be Moralos like that, but if we’re talking the Anglo Saxon tradition, the obligation ranking is different than the overall personal preference ranking. What you owe is different than what I would prefer.
The thought that disturbs me is that the Moralps really only have one ranking, what they prefer. This is what I find so totalitarian about Utilitarianism.
Step back from the magic words. We have preferences. We take action based on those preferences. We reward/punish/coerce people based on them acting in accord with those preferences, or acting to ideologically support them, or reward/punish/coerce based on how they reward/punish/coerce on the first two, and up through higher and higher orders of evaluation.
So what is obligation? I think it’s what we call our willingness to coerce/punish, up through the higher order of evaluation, and that’s similarly the core of what makes something a moral preference.
If you’re not going to punish/coerce, and only reward, that preference looks more like the preference for beautiful people.
Is this truly the “Utilitarianism” proposed here? Just rewarding, and not punishing or coercing?
I’d feel less creeped out by Utilitarianism if that were so.
Let me zoom out a bit to explain where I’m coming from.
I’m not fully satisfied with any metaethics, and I feel like I’m making a not-so-well-justified leap of faith to believe in any morality. Given that that’s the case, I’d like to at least minimize the leap of faith. I’d rather have just a mysterious concept of preference than a mysterious concept of preference and a mysterious concept of obligation.
So my vision of the utilitarian project is essentially reductionist: to take the preference ranking as the only magical component*, and build the rest using that plus ordinary is-facts. So if we define ‘obligations’ as ‘things we’re willing to coerce you to do’, we can decide whether X is an obligation by asking “Do we prefer a society that coerces X, or one that doesn’t?”
*Or maybe even start with selfish preferences and then apply a contractarian argument to get the impartial utility function, or something.
I don’t think my concept of obligation is mysterious:
Social animals evolved to have all sorts of social preferences, and mechanisms for enforcing those preferences, such as impulses toward reward/coercion/punishment. Because we are conceptual animals, those mechanisms are open to some conceptual programming.
Also, those mechanisms need not be weighted identically in all people, so that they exhibit different moral behavior and preferences, like Moralps and Moralos.
I think you’re making a good start in any project by first taking a reductionist view. What are we really talking about, when we’re talking about morality?
I think you should do that first, even if your project is the highly conceptually derivative one of sanctioning state power.
My project, such as it was, was an egoist project. OK, I don’t have to be a slave to moral mumbo jumbo. What now? What’s going on with morality?
What I and some other egoists concluded was that we had social preferences too. We reward/punish/coerce as well. But starting with a consciousness that my social preferences are to be expected in a social animal, and are mine, to do with what I will, and you have yours, that are unlikely to be identical, leads to different conclusions and behaviors than people who take their social feelings and impulses as universal commands from the universe.
Interesting, our differences are deeper than I expected!
Do you feel you have a good grip on my foundations, or is there something I should expand on?
Let me check my understanding of your foundations: You make decisions to satisfy your own preferences. Some of these might be ‘social preferences’, which might include e.g. a preference for fewer malaria deaths in the developing world, which might lead you to want to donate some of your income to charity. You do not admit any sense in which it would be ‘better’ to donate more of your income than you want to, except perhaps by admitting meta-preferences like “I would prefer if I had a stronger preference for fewer malaria deaths”.
When you say someone is obligated to do X, you mean that you would prefer that they be coerced to do X. (I hesitate to summarize it this way, though, because it means that if you say they’re obligated and I say they aren’t, we haven’t actually contradicted each other).
Is the above a correct description of your approach?
It’s not just me. This is my model of human moral activity. We’re social animals with some built in social preferences, along with other built in preferences.
I could come up with a zillion different “betters” where that was the case, but that doesn’t mean that I find it better overall according to my values.
That’s too strong for some cases, but it was my mistake for saying it so categorically in the first place. I can think of a lot of things I consider interpersonal obligations where I wouldn’t want coercion/violence used in retaliation for violating them. I will just assign you a few asshole points, and adjust my behavior accordingly, possibly including imposing costs on you out of spite.
That’s the thing. The reality of our preferences is that they weren’t designed to fit into boxes. Preferences are rich in structure, and your attempt to simplify them to one preference ranking to rule them all just won’t adequately model what humans are, no matter how intellectually appealing.
We have lots of preference modalities, which have similarities and differences with moral preferences. It tends to be a matter of emphasis and weighting. For example, a lot of our status or beauty preferences function in some way like our moral preferences. Low status entails greater likelihood of punishment, low status rubs off by your failure to disapprove of low status, and both of those occur at higher orders as well—such as if you don’t disapprove of someone who doesn’t disapprove of low status.
In what people call moral concerns, I observe that higher order punishing/rewarding is more pronounced than for other preferences, such as food tastes. If you prefer mint ice cream, it generally won’t be held against you, and most people would consider it weird to do so. If you have some disapproved of moral view, it is held against you, whether you engage in the act or not, and it is expected that it will be held against you.
That’s almost rule consequentialism.
What buybuy said. Plus… Moralps are possibly hypocritical, but it could be that they are just wrong, claiming one preference but acting as if they have another. If I claim that I would never prefer a child to die so that I can buy a new car, and I then buy a new car instead of sending my money to feed starving children in wherever, then I am effectively making incorrect statements about my preferences, OR I am using the word preferences in a way that renders it uninteresting. Preferences are worth talking about precisely to the extent that they describe what people will actually do.
I suspect in the case of starving children and cars, my ACTUAL preference is much more sentimental and much less universal. If I came home one day and lying on my lawn was a starving child, I would very likely feed that child even if this food came from a store I was keeping to trade for a new car. But if this child is around the corner and out of my sight, then it’s Tesla S time!
So Moralps are possibly hypocritical, but certainly wrong at describing their own preferences, IF we insist that preferences are things that dictate our volition.
Utilitarianism talks about which actions are more moral. It doesn’t talk about which actions a person actually “prefers.” I think it’s more moral to donate 300 dollars to charity than to take myself and two friends out for a holiday dinner. Yet I have reservations for Dec 28th. The fact that I am actually spending the money on my friends and myself doesn’t mean I think this is the most moral thing I could be doing.
I have never claimed people are required to optimize their actions in the pursuit of improving the world. So why would it be hypocritical for me not to try to maximize world utility?
So you are saying: “the right thing to do is donate $300 to charity but I don’t see why I should do that just because I think it is the right thing to do.”
Well once we start talking about the right thing to do without attaching any sense of obligation to doing that thing, I’d like to know what is the point about talking about morality at all. It seems it just becomes another way to say “yay donating $300!” and has no more meaning than that.
Under what I thought were the accepted definitions of the words, saying the moral thing to do is to donate $300 was the same as saying I ought to donate $300. Under that definition, discussions of what was moral and what was not really did carry more weight than just saying “yay donating $300!”
I didn’t say it was “the right thing” to do. I said it was more moral than what I am actually planning to do. You seem to just be assuming people are required to act in the way they find most moral. I don’t think this is a reasonable thing to ask of people.
Utilitarian conclusions clearly contain more info than “yay X,” since they typically allow one to compare different positive options as to which is more positive. In addition, in many contexts utilitarianism gives you a framework for debating what to do. Many people will agree the primary goal of laws in the USA should be to maximize utility for US citizens/residents as long as the law won’t dramatically harm non-residents (some libertarians disagree, but I am just making a claim about what people think). Under these conditions utilitarianism tells you what to do.
Utilitarianism does not tell you how to act in daily life, since it’s unclear how much you should weigh the morality of an action against other concerns.
A moral theory that doesn’t tell you how to act in daily life seems incomplete, at least in comparison to e.g. deontological approaches. If one defines a moral framework as something that does tell you how to act in daily life, as I suspect many of the people you’re thinking of do, then to the extent that utilitarianism is a moral framework, it requires extreme self-sacrifice (because the only, or at least most obvious, way to interpret utilitarianism as something that does tell you how to act in daily life is to interpret it as saying that you are required to act in the way that maximizes utility).
So on some level it’s just an argument about definitions, but there is a real point: either utilitarianism requires this extreme self-sacrifice, or it is something substantially less useful in daily life than deontology or virtue ethics.
Preferences of this sort might be interesting not because they describe what their holders will do themselves, but because they describe what their holders will try to get other people to do. I might think that diverting funds from luxury purchases to starving Africans is always morally good but not care enough (or not have enough moral backbone, or whatever) to divert much of my own money that way—but I might e.g. consistently vote for politicians who do, or choose friends who do, or argue for doing it, or something.
Your comment reads to me like a perfect description of hypocrisy. Am I missing something?
Nope. Real human beings are hypocrites, to some extent, pretty much all the time.
But holding a moral value and being hypocritical about it is different from not holding it at all, so I don’t think it’s correct to say that moral values held hypocritically are uninteresting or meaningless or anything like that.
“Utilitarianism” for many people includes a few beliefs that add up to this requirement.
1) Utility of all humans is more-or-less equal in importance.
2) It’s morally required to make decisions that maximize total utility.
3) There is declining marginal utility for resources.
Item 3 implies that movement of wealth from someone who has more to someone who has less increases total utility. #1 means that this includes your wealth. #2 means it’s obligatory.
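A toy calculation shows how the three pieces combine. Log utility is just one standard stand-in for point 3 (declining marginal utility), not something the comment specifies:

```python
import math

def utility(wealth):
    # Logarithmic utility: each extra unit of wealth matters less
    # the more you already have (declining marginal utility).
    return math.log(wealth)

rich, poor, transfer = 100_000, 1_000, 10_000

before = utility(rich) + utility(poor)                        # total utility now
after  = utility(rich - transfer) + utility(poor + transfer)  # after the transfer

# The rich person loses about 0.11 log-units, the poor person gains about 2.40,
# so the transfer raises the (equally weighted, per point 1) total.
print(before, after, after - before)
```

Point 2 is then what turns “this transfer raises total utility” into “you are obliged to make it.”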
Note that I’m not a utilitarian, and I don’t believe #1 or #2. Anyone who actually does believe these, please feel free to correct me or rephrase to be more accurate.
This sounds like preference utilitarianism, the view that what matters for a person is the extent to which her utility function (“preferences”) is fulfilled. In academic ethics outside of Lesswrong, “utilitarianism” refers to a family of ethical views, of which the most commonly associated one is Bentham’s “classical utilitarianism”, where “utility” is very specifically defined as the “happiness minus suffering” that a person experiences over time.
I’m not seeing where in Dagon’s comment they indicate preference utilitarianism vs (e.g.) hedonic?
I see what you mean. Why I thought he meant preference:
1) talks about “utility of all humans”, whereas a classical utilitarian would more likely have used something like “well-being”. However, you can interpret it as a general placeholder for “whatever matters”.
3) is also something usually mentioned in economics, associated with preference models. Here again, it is true that diminishing marginal utility also applies to classical utilitarianism.
I know of many people who endorse claims 1 and 3. But I know of no one who claims to believe 2. Am I just misinformed about people’s beliefs? Lesswrong is well known for being connected to utilitarianism. Do any prominent lesswrongers explicitly endorse 2?
edit:
My point was I know many people who endorse something like the view in this comment:
2′) One decision is morally better than another if it yields greater expected total utility.
Then you don’t know any utilitarians. Without 2, you don’t have a moral theory.
La Wik:
I think someone is still a utilitarian if instead of 2 they believe something like
2′) One decision is morally better than another if it yields greater expected total utility.
(In particular, I don’t think it’s necessary for a moral theory to be based on a notion of moral requirement as opposed to one of moral preference.)
Um, what’s the difference?
It’s possible to believe some action is morally better than another without feeling it’s required of you to do it.
As ZankerH said, it leaves out the “required to make” part. Also, gjm’s particular formulation of 2′ makes a statement about comparisons between two given decisions, not a statement about the entire search space of possible decisions.
Exactly what ZankerH and DaFranker said. You could augment a theory consisting of 1, 2′, and 3 with further propositions like “It is morally obligatory to do the morally best thing you can on all occasions” or (after further work to define the quantities involved) less demanding ones like “It is morally obligatory to act so as not to decrease expected total utility” or “It is morally obligatory to act in a way that falls short of the maximum achievable total utility by no more than X”. Or you could stick with 1,2′,3 and worry about questions like “what shall I do?” and “is A morally better than B?” rather than “is it obligatory to do A?”. After all, most of the things we do (even ones explicitly informed by moral considerations) aren’t simply a matter of obeying moral obligations.
If you don’t use the “required to make” part, then if you tell me “you should do ___ to maximize utility” I can reply “so what?” It can be indistinguishable, in terms of what actions it makes me take, from not being a utilitarian.
Furthermore, while perhaps I am not obligated to maximize total utility all the time, it’s less plausible that I’m not obligated to maximize it to some extent—for instance, to at least be better at utility than someone we all think is pretty terrible, such as a serial killer. And even that limited degree of obligation produces many of the same problems as being obligated all the time. For instance, we typically think a serial killer is pretty terrible even if he gives away 90% of his income to charity. Am I, then, obliged to be better than such a person? If 20% of his income saves as many lives as are hurt by his serial killing, and if we have similar incomes, that implies I must give away at least 70% of my income to be better than him.
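Spelling out that arithmetic (on the comment’s own, contested way of scoring, which the replies below push back on):

```python
killer_gives   = 0.90   # fraction of income he donates
killing_undoes = 0.20   # his murders cancel the good done by 20% of income
killer_net     = killer_gives - killing_undoes   # = 0.70

# To score higher on this metric without doing any harm yourself,
# you would need to donate more than 70% of a similar income.
print(killer_net)
```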
If I tell you “you are morally required to do X”, you can still reply “so what?”. One can reply “so what?” to anything, and the fact that a moral theory doesn’t prevent that is no objection to it.
(But, for clarity: what utilitarians say and others don’t is less “if you want to maximize utility, do ___” than “you should do ___ because it maximizes utility”. It’s not obvious to me which of those you meant.)
A utilitarian might very well say that you are—hence my remark that various other “it is morally obligatory to …” statements could be part of a utilitarian theory. But what makes a theory utilitarian is not its choice of where to draw the line between obligatory and not-obligatory, but the fact that it makes moral judgements on the basis of an evaluation of overall utility.
I think it will become clear that this argument can’t be right if you consider a variant in which the serial killer’s income is much larger than yours: the conclusion would then be that nothing you can do can make you better than the serial killer. What’s gone wrong here is that when you say “a serial killer is terrible, so I have to be better than he is” you’re evaluating him on a basis that has little to do with net utility, whereas when you say “I must give away at least 70% of my income to be better than him” you’re switching to net utility. It’s not a big surprise if mixing incompatible moral systems gives counterintuitive results.
On a typical utilitarian theory:
- the wealthy serial killer is producing more net positive utility than you are
- he is producing a lot less net positive utility than he could by, e.g., not being a serial killer
- if you tried to imitate him you’d produce a lot less net positive utility than you currently do
and the latter two points are roughly what we mean by saying he’s a very bad person and you should do better. But the metric by which he’s very bad and you should do better is something like “net utility, relative to what you’re in a position to produce”.
But for the kind of utilitarianism you’re describing, if you tell me “you are morally required to do X”, I can say “so what” and be correct by your moral theory’s standards. I can’t do that in response to anything.
What do you mean by “correct”?
Your theory does not claim I ought to do something different.
It does claim something else would be morally better. It doesn’t claim that you are obliged to do it. Why use the word “ought” only for the second and not the first?
Because that is what most English-speaking human beings mean by “ought”.
It doesn’t seem that way to me. It seems to me that “ought” covers a fairly broad range of levels of obligation, so to speak; in cases of outright obligation I would be more inclined to use “must” than “ought”.
I don’t think that saves it. In my scenario, me and the serial killer have similar incomes, but he kills people, and he also gives a lot of money to charity. I am in a position to produce what he produces.
Which means that according to strict utilitarianism you would do better to be like him than to be as you are now. Better still, of course, to do the giving without the mass-murdering.
But the counterintuitive thing here isn’t the demandingness of utilitarianism, but the fact that (at least in implausible artificial cases) it can reckon a serial killer’s way of life better than an ordinary person’s. What generates the possibly-misplaced sense of obligation is thinking of the serial killer as unusually bad when deciding that you have to do better, and then as unusually good when deciding what it means to do better. If you’re a utilitarian and your utility calculations say that the serial killer is doing an enormous amount of good with his donations, you shouldn’t also be seeing him as someone you have to do more good than because he’s so awful.
What generates the sense of obligation is that the serial killer is considered bad for reasons that have nothing to do with utility, including but not limited to the fact that he kills people directly (rather than using a computer, which contributes to global warming, which hurts people) and actively (he kills people rather than keeping money that would have saved their lives). The charity-giving serial killer makes it obvious that the utilitarian assumption that more utility is better than less utility just isn’t true, for what actual human beings mean by good and bad.
I claim to believe 2! I think that we do have lots of moral obligations, and basically nobody is satisfying all of them. It probably isn’t helpful to berate people for not meeting all of their moral obligations (since it’s really really hard to do so, and berating people isn’t likely to help), and that there is room to do better and worse even when we don’t meet our moral obligations, but neither of these facts mean that we don’t have a moral obligation to maximise expected moral-utility.
If you want to completely optimize your life for creating more global utilons then, yes, utilitarianism requires extreme self-sacrifice. The time you spent playing that video game or hanging out with friends netted you utility/happiness, but you could have spent that time working and donating the money to an effective charity. That tasty cheese you ate probably made you quite happy, but it didn’t maximize utility. Better switch to the bare minimum you need to work the highest-paying job you can manage and give all the money you don’t strictly need to an effective charity.
Of course, humans (generally) can’t manage that. You won’t be able to function at a high-paying job if you can’t occasionally indulge in some tasty food or if your Fun-bar is in the red all the time. (Or, for that matter, most of your other bars. You’ll probably spend a lot of time lying on the floor crying if you live like this.)
While it might be morally optimal for you to ignore your own needs and work on the biggest gains you can manage, this isn’t something that can be required of (most) people. You can use utilitarianism as a framework to base your decisions on without giving up everything. Giving up 100% of your income to a good charity might be morally optimal, but [giving 10% still makes a huge impact](https://www.givingwhatwecan.org) and allows you a comfortable life yourself.
I don’t think being perfectly utilitarian is something (most) humans should strive for. Use it as guidelines to influence the world around you, but don’t let it drive you crazy.
Or to quote someone on Skype:
People generally don’t manage that. People learn what they can and can’t do in Ranger School.
This is another case where it just seems there are multiple species of homo sapiens. Or maybe I’m just a Martian.
When other people say “X is moral”, they mean “I will say that ‘X is moral’, and will occasionally do X”?
I can almost make sense of it, if they’re all just egoists, like me. My moral preferences are some of my many preferences. Sometimes I indulge my moral preferences, and sometimes my gustatory preferences. Moral is much like “yummy”. Just because something is “yummy”, it doesn’t mean I plan on eating it all day, or that I plan to eat all day.
But that is simply not my experience of how the term “moral” is generally used. Moral seems to mean “that’s my criterion for judging what I should and shouldn’t do”. That’s how everyone talks, although never quite how everyone acts. Has there been an egoist revolution, and I just never realized it?
I think people have expressed before being “The Occasional Utilitarian” (my term), devoting some time slices to a particular moral theory. And other times, not. “I’m a utilitarian, when the mood strikes me”.
It reminds me of a talk I had with some gal years ago about her upcoming marriage. “Oh, we’ll never get divorced, no way, no how, but if we do...” What’s going through a person’s head when they say things like that? It’s just bizarre to me.
Years later, I was on a date at a sex show and bumped into her. She was divorced.
Knowing what is moral and acting on what is moral are two different things. Acting on what is moral is often hard, and people aren’t known for their propensity to do hard things.
The divide between “I know what is moral” and “I act on what I know to be moral” exists in most moral theories with the possible exception (as far as I know, which isn’t all that far) of egoism.
Moral, or rather immoral, can also be used to mean “should be illegal”. [*] Inasmuch as most people obey the law, there is quite a lot of morality going on. Your analysis basically states that there isn’t much individual, supererogatory moral action going on. That’s true. People aren’t good at putting morality into practice, which is why morality needs to be buttressed by things like legal systems. But there is a lot of unflashy morality going on... trading fairly, refraining from violence and so on. So the conclusion that people are rarely moral doesn’t follow.
[*] This comment should not be taken to mean that in the opinion of the present author, everything which is illegal in every and any society is ipso facto immoral.
Can, but not necessarily should. Societies which move sufficiently far in that direction are called “totalitarian”.
And there is another too far in the other direction, although no one wants to mention that.
Why not? The dimension that we are talking about is the sync—or the disconnect—between morality and legality. If this disconnect is huge, the terms used would be “unjust” and “arbitrary”. Historically, such things happened when a society was conquered by someone with a significantly different culture.
What I was talking about was the larger but less noticeable part of the iceberg of morality.
If you, perhaps, could be more explicit..?
Ah, I see.
What does this mean if we taboo “illegal”?
As far as I can tell, it means something like “If you do what you shouldn’t do, someone should come around and do terrible things to you, against which you will have no recourse.”
That’s sort of true, but heavily spun. If you kill someone, what recourse do they have...except to live in a society that discourages murder by punishing murderers? Perhaps you had something like drug-taking in mind as a central example of “what you should not do”.
That’s a great quote! Despite its brevity it explains a big part of what I used hundreds of words to explain:)
It’s not just people in general who feel that way, but also some moral philosophers. Here are two related links about the demandingness objection to utilitarianism:
http://en.wikipedia.org/wiki/Demandingness_objection
http://blog.practicalethics.ox.ac.uk/2014/11/why-i-am-not-a-utilitarian/
The way I think of the complication is that these moral decisions are not about answering “what should I do?” but “what can I get myself to do?”
If someone on the street asks you “what is the right thing for me to do today?” you probably should not answer “donate all of your money to charity beyond what you need to survive.” This advice will just get ignored. More conventional advice that is less likely to get ignored ultimately does more for the common good.
Moral decisions that you make for yourself are a lot like giving advice. You don’t actually have perfect control over your actions. So utilitarianism demands that you give yourself the best advice that will get followed, and possibly explore strategies for giving yourself better advice.
(And telling yourself that you should feel guilty for not donating all your money to charity is just bad strategy for getting yourself to donate money to charity.)
For me utilitarianism means maximizing a weighted sum of everyone’s utility, but the weights don’t have to be equal. If you give yourself a high enough weight, no extreme self-sacrifice is necessary. The reason to be a utilitarian is that if some outcome is not consistent with it, it should be possible to make some people better off without making anyone worse off.
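As a sketch, the objective described in this comment could be written as follows; the notation is illustrative and not the commenter’s own:

```latex
% Weighted-sum objective as described above (notation is only an illustration):
% u_i(a) is person i's utility under action a, and w_i >= 0 is person i's weight.
\[
  a^{*} \;=\; \arg\max_{a} \sum_{i} w_{i}\, u_{i}(a),
  \qquad \text{where this view permits } w_{\text{self}} > w_{\text{stranger}}.
\]
```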
This is not a standard usage of the term “utilitarianism”. You can have a weighting, for example based on capacity for suffering, but you can’t weight yourself more just because you’re you and call it utilitarianism.
But if you have to give yourself and your children the same weights as strangers, then almost no one is a utilitarian.
I think that there’s a difference between “nobody fulfills their moral obligations according to utilitarianism, or even tries very hard to” and “nobody believes that utilitarianism is the correct moral theory”. People are motivated by lots of things other than what they believe the correct moral theory is.
As far as I understand it, the text quoted here is implicitly relying on the social imperative “be as moral as possible”. This is where the “obligatory” comes from. The problem here is that the imperative “be as moral as possible” gets increasingly more difficult as more actions acquire moral weight. If one has internalized this imperative (which is realistic given the weight of societal pressure behind it), utilitarianism puts an unbearable moral weight on one’s metaphorical shoulders.
Of course, in reality, utilitarianism implies this degree of self-sacrifice only if you demand (possibly inhuman) moral perfection of yourself. The actual weight you have to accept is defined by whatever moral standard you accept for yourself. For example, you might decide to be at least as moral as the people around you, or you might decide to be as moral as you can without causing yourself major inconvenience, or you might decide to be as immoral as possible (though you probably shouldn’t do that, especially considering that it is probably about as difficult as being perfectly moral).
At the end of the day, utilitarianism is just a scale. What you do with that scale is up to you.
That is prone to the charity-giving serial killer problem. Suppose someone kills people but gives 90% of his income to charity, and the utility from giving just 20% would be enough to make up for his kills. His net contribution is then equivalent to giving 70% (90% minus the 20% the killings cost). Pretty much any such moral standard says that you must be better than him, yet he is producing a huge amount of utility, so to be better than him from a utilitarian standpoint you must give at least 70%.
If you avoid utilitarianism you can describe being “better than” the serial killer in terms other than producing more utility; for instance, distinguishing between deaths resulting from action and from inaction.
Why does this need to be the case? I would posit that the only paradox here is that our intuitions find it hard to accept the idea of a serial killer being a good person, much less a better person than one need strive to be. This shouldn’t be that surprising—really, it is just the claim that utilitarianism may not align well with our intuitions.
Now, you can totally make the argument that not aligning with our intuitions is a flaw of utilitarianism, and you would have a point. If your goal in a moral theory is a way of quantifying your intuitions about morality, then by all means use a different approach. On the other hand, if your goal is to reason about actions in terms of their cumulative impact on the world around you, then utilitarianism presents the best option, and you may just have to bite the bullet when it comes to your intuitions.
Apparently retracting doesn’t work the way I thought. Oops.
What does it mean to talk about morality or human motivation in the terms of utilitarianism and consequentialism? It means restricting yourself to the vocabulary of that moral philosophy and to the rules it provides for deriving new sentences from that vocabulary. Once you restrict your vocabulary and the rules for forming sentences with it, you usually restrict what conclusions you can derive with it.
If you think in terms of consequentialism, what operations can you perform? You can assign utilities to different world states (depending on the flavour of consequentialism you are using, there may be further restrictions on how you can do this) and you can compare them. Or, in another version, you cannot assign utilities directly, but you can impose a partial order (a binary relation) on pairs of world states. That’s all. If you add something else, then you are no longer talking in purely consequentialist terms. For example, take the trolley problem. Given the way the dilemma is usually described, there are not many sentences you can derive using consequentialist terms: the whole framing of the problem gives you just two world states and asks you to assign utilities to them.
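A minimal sketch of those two operations, with made-up states and numbers (the code is only an illustration of the point above, not something from the comment):

```python
# Hypothetical utilities for the two trolley-problem world states.
utilities = {
    "pull_lever_one_dies": -1.0,
    "do_nothing_five_die": -5.0,
}

def prefer(state_a: str, state_b: str) -> str:
    """Return whichever world state has the higher assigned utility (the comparison operation)."""
    return state_a if utilities[state_a] >= utilities[state_b] else state_b

print(prefer("pull_lever_one_dies", "do_nothing_five_die"))  # -> pull_lever_one_dies
# Note that nothing here turns "higher utility" into "obligatory"; that extra
# step has to come from outside this vocabulary.
```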
Now, you can use the terms of consequentialist moral philosophy to talk about all human motivation. If your preferences satisfy certain axioms, the von Neumann–Morgenstern utility theorem allows that. Let’s denote this way of thinking as (1).
Or you can use the terms of consequentialist moral philosophy in a much more restricted domain. Most people use those terms only to talk about things they consider related to morality (how some problems come to be discussed in the terms of moral philosophy and treated as moral problems, while other problems don’t, is an interesting but quite distinct question). When they talk about all human motivation, they use terms that come from outside consequentialist moral philosophy. Let’s denote this way of thinking as (2).
Now, what do you use to describe all human motivation? Just the terms of consequentialist moral philosophy or other terms as well? Let’s compare.
and
Now, I know very little about what kind of theory of morality and human motivation you or Chris Hallquist support. Therefore, my next paragraph is based on the impressions I got reading those two quotes.
I think your confusion comes from the fact that you assume Chris Hallquist is using the terms of consequentialist moral philosophy in pretty much the same way you do. However, it seems to me that Chris Hallquist is using them in way (1) (or close to it), whereas you are closer to way (2). And when you think about all human motivation, you use various terms and concepts, some of which are not from the vocabulary of consequentialism.
The very fact that you can ask a question like “But where does the ‘obligatory’ part come in. I don’t really how its obvious what, if any, ethical obligations utilitarianism implies.” implies that you are using terms that come from outside consequentialism, because remember: in consequentialism you can only assign utilities to world states and compare them, that’s all. The very fact that it makes sense to you that someone could compare the utilities of two world states, find that the utility of world_state_1 is greater than the utility of world_state_2, and then act against that comparison means that when thinking about human motivation you are (perhaps implicitly) using concepts that come from somewhere other than consequentialism [1]. There is no way to derive disobedience using the operations of consequentialism. Therefore, if you use the terms of consequentialism to describe all human motivation (way (1)), maximizing utility cannot help but be “obligatory”. I think this is the idea Chris Hallquist is implicitly trying to convey.

Using way (1) of thinking (which I think Chris Hallquist is using), if your utility function assigns utilities to world states in such a way that the world states achievable only by donating a lot of money to charity (and not any other way) are preferable to other world states, then you are by definition motivated to donate as much money to charity as possible. Now, isn’t that a bit tautological? If you use terms such as “utility function” to describe all human motivation, why are such encouragements to donate to charity even needed? Wouldn’t you already be motivated to donate a lot of your income to charity?

I think what a hypothetical utilitarian who says such things (a hypothetical person whose ideas about utilitarianism Chris Hallquist is channeling) would be trying to do is modify your de facto utility function (if we are using this term to describe and model all human motivation, assuming that is possible) by appealing to the kind of de facto utility function you would like to have, or like to imagine yourself having. That is: what would you like to be motivated by? The hypothetical utilitarian would like your motivation to be such that it could be modeled by a utility function which assigns higher utilities to world states that (in this particular case) are achievable by donating a lot of money to charity.
[1] Of course, there is another possibility: that you talk about certain things using terms such as “utility function”, while all your motivations (including, obviously, subconscious ones) can also be modeled by a utility function, but the two are different. The impression of disobedience then comes from the fact that the conclusions derivable from the second utility function differ from the conclusions you derive using the first one.
My impression is that most people who identify as utilitarians do not use the terms of consequentialist moral philosophy to describe all of their motivation. They use them when they talk about problems and situations that are considered related to morality. For example, when they read about something and recognize it as a moral problem, they start using those terms. But their whole apparatus of human motivation (which may or may not be modeled as a utility function) is much larger than that, and their utilitarianism (i.e. the utility function as they are able to consciously think about it) doesn’t cover all of it, because that would be too difficult. The most you can say is that they think about various situations and what they should do if they found themselves in them (e.g. the trolley dilemma, among others), precompute and cache the answers, and (if their memory, courage and willpower don’t fail them) perform those actions when those situations arise.
Utilitarianism doesn’t have anywhere to place a non-arbitrary level of obligation except at zero and at maximum effort. The zero is significant, because it means utilitarianism can’t bootstrap obligation... I think that is the real problem, not demandingness.
As others have stated, obligation isn’t really part of utilitarianism. However, if you really wanted to use the term, one possible way to incorporate it is to ask what the xth percentile of people would do in this situation (with people ranked by the expected utility they produce), given that everyone has the same information, and use that as the boundary for the label “obligation”.
As an aside, there is a thought experiment called the “veil of ignorance.” Although it is not, strictly speaking, utilitarianism, you can view it that way. It goes something like this: when deciding how a society should be set up, the designer should set it up as if they had no idea who they would become in that society. In this case, “obligation” would loosely correspond to “what rules should that society have?” A utilitarian’s obligated giving rate would then be something like
k * (Income − Poverty Line), where k is some number between 0 and 1 chosen so that utility is maximized if everyone gave at that rate.
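A toy illustration of that rule in code; the income, poverty line, and value of k below are hypothetical numbers, not figures proposed anywhere in the thread:

```python
def obligated_giving(income: float, poverty_line: float, k: float) -> float:
    """Giving 'obligation' under the k * (income - poverty line) rule; never negative."""
    return max(0.0, k * (income - poverty_line))

# Example with made-up figures: $50,000 income, $15,000 poverty line, k = 0.1.
print(obligated_giving(50_000, 15_000, 0.1))  # -> 3500.0
```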
I think you have to look at utilitarianism through the question, “What does the most good for the greatest number of people in a way that is both effective and efficient?” That means sacrifice may be a means to an end in achieving that greatest good for the greatest number of people. The sacrifice is that actions that disproportionately disadvantage, objectify, or exploit people should not be taken; those that benefit the greatest number should. Utilitarianism is all about the greatest good. I don’t think moral decisions have much place anywhere outside of what harms people, so I don’t think there is a separate moral element to utilitarianism; it all comes back to the greatest-good question.

If we lived in a world where our ills were solved by that question, I am not sure we would want to live in it, because it would mean the whole of humanity existing on very little, seeing as there are so many of us and much of the global economy is predicated on developed nations raping the undeveloped nations for resources, talent, and wealth. Utilitarianism by its very nature lowers all boats to a certain common standard and only raises boats when all boats can be raised, which is not very often. It is a good survival strategy, but maybe not something to thrive on beyond survival plus some culture and a small amount of leisure.
Utilitarianism is a normative ethical theory. Normative ethical theories tell you what to do (or, in the case of virtue ethics, tell you what kind of person to be). In the specific case of utilitarianism, it holds that the right thing to do (i.e. what you ought to do) is maximize world utility. In the current world, there are many people who could sacrifice a lot to generate even more world utility. Utilitarianism holds that they should do so, therefore it is demanding.
As I understand it, and in my just-made-up-now terminology, there are two different kinds of utilitarianism: Normative and Descriptive. In Normative, you try to figure out the best possible action and you must do that action. In Descriptive, you don’t always have to do the best possible action if you don’t want to, but you are still trying to do the most good with what you do. For example, consider the following hypothetical actions:
Get a high-paying job and donate all of my earnings except the bare minimum necessary to survive to effective charities. (oversimplified utility: 50,000)
Get a job I enjoy and donate 10% of my earnings to effective charities. (oversimplified utility: 5,000)
Volunteer at a homeless shelter. (oversimplified utility: 500)
Buy one game for $10 and donate $10 to effective charities. (oversimplified utility: 50)
Buy two games. (oversimplified utility: 5)
Bang my head against a wall. (oversimplified utility: −5)
Normative would say that I must always pick the first action. Descriptive would say that these are some options with different utilities, and I should probably try to get one with a higher utility, but I don’t have to pick the optimal one if I don’t want to. So with Descriptive, if I didn’t feel like making myself a slave to the greater good as in the first example, but I thought I would be okay with effective tithing or volunteering, then I could do that instead and still help a lot of people. If I were in a bad place and all I could motivate myself to do was donate $10 to an effective charity, I would still know that’s a higher-utility action than buying a second game. Even a completely selfish action such as buying two games is still better than an action that is harmful, such as the oversimplified example of banging my head against a wall. (A rough code sketch of this contrast appears below.)
I feel that Descriptive is more practical, though theoretically Normative would take into account an agent’s motivation in determining the best possible action that the agent could take.
(This was somewhat inspired by a similar discussion on the rational side of Tumblr from maybe a few months ago, though I don’t remember exactly where. If anyone knows, please share a link.)
ETA: It seems that there are already terms for what I was trying to describe: maximising utilitarianism instead of Normative, and scalar utilitarianism instead of Descriptive.
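To make that terminology concrete, here is a small sketch contrasting the maximising and scalar readings, reusing the oversimplified utilities from the list above (the code itself is only an illustration, not something from the thread):

```python
# Oversimplified utilities, taken from the list of hypothetical actions above.
options = {
    "earn to give everything above subsistence": 50_000,
    "enjoyable job, donate 10%": 5_000,
    "volunteer at a homeless shelter": 500,
    "buy one game, donate $10": 50,
    "buy two games": 5,
    "bang head against a wall": -5,
}

# Maximising utilitarianism: the only acceptable choice is the single best option.
maximising_choice = max(options, key=options.get)

# Scalar utilitarianism: options are simply graded as better or worse,
# with no cutoff for "permissible".
def compare(a: str, b: str) -> str:
    return f"'{a}' is {'better' if options[a] > options[b] else 'worse'} than '{b}'"

print(maximising_choice)
print(compare("buy one game, donate $10", "buy two games"))
```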
The word “utilitarianism” technically means something like, “an algorithm for determining whether any given action should or should not be undertaken, given some predetermined utility function”. However, when most people think of utilitarianism, they usually have a very specific utility function in mind. Taken together, the algorithm and the function do indeed imply certain “ethical obligations”, which are somewhat tautologically defined as “doing whatever maximizes this utility function”.
In general, the word “utilitarian” has been effectively re-defined in common speech as something like, “ruthlessly efficient to the point of extreme ugliness”, so utilitarianism gets the horns effect from that.
That’s not how the term “utilitarianism” is used in philosophy. The utility function has to be agent neutral. So a utility function where your welfare counts 10x as much as everyone else’s wouldn’t be utilitarian.