Cowan: But doesn’t preference utilitarianism itself require some means of aggregation? The means we use for weighing different clashing preferences, can require some kind of value judgments above and beyond Utilitarianism?
Singer: I don’t quite see why that should be so. While acknowledging the practical differences of actually weighing up and calculating all the preferences, I fail to see why it involves other values apart from the preferences themselves.
This is very similar to a question I asked in response to this article by Julia Galef. You can find my comment here, along with several (unsuccessful, IMO) attempts to answer it. This worries me somewhat, because many Less Wrongers affirm utilitarianism without so much as addressing a huge gaping hole at the very core of its logic. It seems to me that utilitarianism hasn’t been paying rent for quite some time, but there are no signs that it is about to be evicted.
You’re saying that Utilitarianism is fatally flawed because there’s no “method (or even a good reason to believe there is such a method) for interpersonal utility comparison”, right?
Utilitarians try to maximize some quantity across all people, generally either the sum or the average of either happiness or satisfied preferences. These can’t be measured directly, so we estimate them as well as we can. For example, to figure out how unhappy back pain is making someone, you could ask them what probability of success an operation that would cure their back pain (or kill them if it failed) would need to have before they would take it. Questions like this suggest that nearly everyone has the same basic preferences and enjoys the same basic things: very strong preferences for (or happiness from) having at least minimal food, shelter, medical care, and so on. Unless we have some reason to believe otherwise, we should just add these up across people equally, assuming that having far too little food is as bad for me as it is for you.
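As a minimal sketch of the elicitation described above (the 0-to-1 scale with death at 0, and all the names and numbers, are illustrative assumptions rather than anything the commenter specified):

    # Standard-gamble style question: at what success probability p would you
    # accept an operation that cures your back pain with probability p and
    # kills you with probability 1 - p?  If death = 0 and a pain-free life = 1,
    # indifference at p implies utility(life with back pain) = p.
    def utility_of_current_state(indifference_probability: float) -> float:
        return indifference_probability

    answers = {"alice": 0.90, "bob": 0.95, "carol": 0.80}  # hypothetical answers
    utilities = {name: utility_of_current_state(p) for name, p in answers.items()}

    # The "add these up across people equally" step from the comment above:
    print(sum(utilities.values()))                   # total
    print(sum(utilities.values()) / len(utilities))  # average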
Utilitarianism is a value system. It doesn’t pay rent in the same way beliefs do, in anticipated experiences. Instead it pays rent in telling us what to do. This is a much weaker standard, but Utilitarianism clearly meets it.
We can’t “just add these [preferences] up across people equally” because utility functions are only defined up to an affine transformation.
You might be able to “just add up” pleasure, on the other hand, though you are then vulnerable to utility monsters, etc.
For a Total Utilitarian it’s not a problem to be missing a zero point (unless you’re talking about adding/removing people).
For an Average Utilitarian, or a Total Utilitarian considering birth or death, you try to identify the point at which a life is not worth living. You estimate as well as you can.
Multiplication by a constant is an affine transformation. This clearly is a very big problem.
But all we want is an ordering of choices, and affine transformations (with a positive multiplicative constant) are order preserving.
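Both comments are pointing at a standard fact from textbook utility theory: a positive affine transformation preserves each individual’s own ordering, but it does not leave interpersonal sums invariant. With $u'_i(x) = a_i u_i(x) + b_i$ and $a_i > 0$,
\[ u_i(x) \ge u_i(y) \iff u'_i(x) \ge u'_i(y), \]
so any one person’s ranking survives rescaling, but
\[ \sum_i u_i(x) \ge \sum_i u_i(y) \quad\not\Longleftrightarrow\quad \sum_i u'_i(x) \ge \sum_i u'_i(y) \]
unless all the $a_i$ are equal, which is exactly the interpersonal weighting that has to be chosen somehow.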
Doesn’t “multiplication by a constant” mean births and deaths? Which puts you in my second paragraph: you try to figure out at what point it would be better to never have lived at all. The point at which a life is a net negative is not very clear, and many Utilitarians disagree on where it is. I agree that this is a “big problem”, though I think I would prefer the phrasing “open question”.
Asking people to trade off various goods against risk of death allows you to elicit a utility function with a zero point, where death has zero utility. But such a utility function is only determined up to multiplication by a positive constant. With just this information, we can’t even decide how to distribute goods among a population consisting of two people. Depending on how we scale their utility functions, one of them could be a utility monster. If you choose two calibration points for utility functions (say, death and some other outcome O), then you can make interpersonal comparisons of utility — although this comes at the cost of deciding a priori that one person’s death is as good as another’s, and one person’s outcome O is as good as another’s, ceteris paribus, independently of their preferences.
Yes, thank you for taking the time to explain.
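A small numeric sketch of the two points just discussed, with invented numbers: rescaling one person’s death-anchored utilities can flip which option the sum favours, and fixing a second shared calibration point (death = 0, some reference outcome O = 1) removes that freedom, at the cost the previous comment describes.

    # Utilities elicited with death = 0, for two people and two options.
    u_alice = {"A": 0.6, "B": 0.9}
    u_bob = {"A": 0.8, "B": 0.3}

    def total(u1, u2, scale1=1.0, scale2=1.0):
        """Sum of utilities after a (freely chosen) positive rescaling."""
        return {opt: scale1 * u1[opt] + scale2 * u2[opt] for opt in u1}

    print(total(u_alice, u_bob))               # {'A': 1.4, 'B': 1.2}: A wins
    print(total(u_alice, u_bob, scale1=10.0))  # {'A': 6.8, 'B': 9.3}: B wins;
                                               # Alice is now a "utility monster"

    # Two-point calibration: divide by each person's elicited utility for a
    # shared reference outcome O, so that death = 0 and O = 1 for everyone.
    def calibrate(u, u_of_O):
        return {opt: val / u_of_O for opt, val in u.items()}

    # Suppose O was elicited at 0.5 for Alice and 0.25 for Bob:
    print(calibrate(u_alice, 0.5))   # {'A': 1.2, 'B': 1.8}
    print(calibrate(u_bob, 0.25))    # {'A': 3.2, 'B': 1.2}
    # The scales are now pinned, but only by stipulating that one person's
    # death (and one person's O) counts the same as another's.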
I will grant, for the sake of argument, the assumption that utilitarianism is a value system rather than a belief. Then either utilitarianism doesn’t have a truth-value, or it does have one but is only “true” for those people who prefer it. Why should I prefer utilitarianism? It seems to have several properties that make it look unappealing compared to other ethical theories (or “value systems”).
For example, utilitarianism requires knowing lots of social science and being able to perform very computationally expensive calculations. The Decalogue, by contrast, only requires that you memorise a small list of rules and be able to judge when a violation has occurred (and our minds are already much better optimised for this kind of judgement than for utility calculations, because of our evolutionary history). Also, from my perspective, the Decalogue is preferable because it is much easier to meet its standard (it really isn’t that hard not to murder people or steal from them, and to take a break once a week), which is much more psychologically appealing than beating yourself up for going to see a movie instead of donating your kidney to a starving child in Africa.
So, why should I adopt utilitarianism rather than God’s Commandments, egoism, the Categorical Imperative, or any other ethical theory that I happen to fancy?
Wait, are you really claiming we should choose a moral system based on simplicity alone? And that a system of judging how to treat other people that “requires knowing lots of social science” is too complicated? I’d distrust any way of judging how to treat people that didn’t require social science. As for calculations, I agree that we don’t have very good ways to quantify other people’s happiness and suffering (or even our own), but our best guess is better than throwing all the data out and going with arbitrary rules like commandments.
The categorical imperative is nice if you get to make the rules for everyone, but none of us do. Utilitarianism appeals to me because I believe I have worth and other people have worth, and I should do things that take that into account.
Jayson’s point is that a moral system so complicated that you can’t figure out whether a given action is moral isn’t very useful.
Nutrition is also impossible to perfectly understand, but I take my best guess and know not to eat rocks. Choosing arbitrary rules is not a good alternative to doing your best at rules you don’t fully understand.
How would you know whether utilitarianism is telling you to do the right thing or not? What experiment would you run? On Less Wrong these are supposed to be basic questions you may ask of any belief. Why is it okay to place utilitarianism in a non-overlapping magisteria (NOMA), but not, say, religion?
I am simply pointing out that utilitarianism doesn’t meet Less Wrong’s epistemic standards, and that if utilitarianism is mere personal preference, your arguments are no more persuasive to me than a chocolate-eater’s would be to a vanilla-eater (except that, in this case, chocolate (utilitarianism) is more expensive than vanilla (the 10 Commandments)).
Also, the Decalogue is not an arbitrary set of rules. We have quite good evidence that it is adaptive in many different environments.
Sorry, I was going in the wrong direction. You’re right that utilitarianism isn’t a tool, but a descriptor of what I value.
I care about both my wellbeing and my husband’s wellbeing. No moral system spells out how to balance these things—the Decalogue merely forbids killing him or cheating on him, but doesn’t address whether it’s permissible to turn on the light while he’s trying to sleep or if I should dress in the dark instead. Should I say, “balancing multiple people’s needs is too computationally costly” and give up on the whole project?
When a computation gets too maddening, maybe so. Said husband (jkaufman) and I value our own wellbeing, and we also value the lives of strangers. We give some of our money to buy mosquito nets for strangers, but we don’t have a perfect way to calculate how much, and at points it has been maddening to choose. So we pick an amount, somewhat arbitrarily, and go with it.
Picking a simpler system might minimize thought required on my part, but it wouldn’t maximize what I want to maximize.
So utilitarianism isn’t true; it is a matter of taste (preferences, values, etc.)? I’m fine with that. The problem I see here is this: neither I nor anyone I have ever met actually has preferences that are isomorphic to utilitarianism. (I am not including you, because I do not believe you when you say that utilitarianism describes your value system; I will explain why below.)
Your point about balancing your own and your husband’s wellbeing is not a reason to adopt utilitarianism over alternative moral theories. Why? Because utilitarianism is not required in order to balance some people’s interests against others’. Altruism does not require weighing everyone in your preference function equally, but utilitarianism does. Even egoists (typically) have friends they care about. The motto of utilitarianism is “the greatest good for the greatest number”, not “the greatest good for me and the people I care most about”. If you have ever purchased a birthday present for, say, your husband instead of feeding the hungry (who would have gotten more utility from those particular resources), then to that extent your values are not utilitarian (as demonstrated by the weak axiom of revealed preference, WARP).
Even if you could measure utility perfectly and perform rock-solid interpersonal utility calculations, I suspect that you would still not weigh your own well-being (or your husband’s, friends’, etc.) equally with that of random strangers. If I am right about this, then your defence of utilitarianism as your own personal value system fails on the ground that it is a false claim about a particular person’s preferences (namely, yours).
In summary, I find utilitarianism as a proposition and utilitarianism as a value system both very unpersuasive. As for the former, I have asked sophisticated and knowledgeable utilitarians to tell me what experiences I should anticipate in the world if utilitarianism is true (and should not anticipate if other, contradictory moral theories were true), and, so far, they have been unable to do so. Propositions of this kind (meaningless or metaphysical propositions) don’t ordinarily warrant much time spent thinking about them. As for the latter, according to my revealed preferences, utilitarianism does not describe my preferences at all accurately, so it is not much use for determining how to act. It is simply not, in fact, my value system.
You ask whether utilitarianism is “true” or just a matter of taste. I don’t understand how “true” applies to a matter of taste, any more than a taste for chocolate is “truer” than any other.
You’re right that utilitarianism isn’t the only way to balance some people’s interests against others’; there are others, but this is the one that seems best to me.
Choosing between a present for my husband and feeding the hungry is exactly the type of decision we found maddening, which is why we currently have firm charity and non-charity budgets. Before that system I did spend money on non-necessities, and I felt terrible about it. So you’re correct that I have other preferences besides utilitarianism.
I don’t think it’s fair or accurate to say “If you ever spent any resources on anything other than what you say you prefer, it’s not really your preference.” I believe people can prefer multiple things at once. I value the greatest good for the greatest number, and if I could redesign myself as a perfect person, I would always act on that preference. But as a mammal, yes, I also have a drive to care for me and mine more than for strangers. When I’ve tried to suppress that entirely, I was very unhappy.
I think a pragmatic utilitarian takes into account the fact that we are mammals, and that at some point we’ll probably break down if we don’t satisfy our other preferences a little. I try to balance it at a point where I can sustain what I’m doing for the rest of my life.
I came late to this whole philosophy thing, so it took me a while to find out “utilitarianism” is what people called what I was trying to do. The name isn’t really important to me, so it may be that I’ve been using it wrong or we have different definitions of what counts as real utilitarianism.
Saying utilitarianism isn’t true because some people aren’t automatically motivated to follow it is like saying that grass isn’t green because some people wish it was purple. If you don’t want to follow utilitarian ethics that doesn’t mean they aren’t true. It just means that you’re not nearly as good a person as someone who does. If you genuinely want to be a bad person then nothing can change your mind, but most human beings place at least some value on morality.
You’re confusing moral truth with motivational internalism. Motivational internalism states that moral knowledge is intrinsically motivating: simply knowing that something is good and right motivates a rational entity to do it. That’s obviously false.
Its opposite is motivational externalism, which states that we are motivated to act morally by our moral emotions (e.g. sympathy and compassion) and by willpower. Motivational externalism seems obviously correct to me. That in turn implies that people will often act immorally if their willpower, compassion, and other moral emotions are depleted, even if they know intellectually that their behavior is less moral than it could be.
There is a vast, vast amount of writing at Less Wrong on the fact that people’s behavior and their values often fail to coincide. Have you never read anything on the topic of “akrasia”? Revealed preference is moderately informative with regard to people’s values, but it is nowhere near 100% reliable. If someone talks about how utilitarianism is correct but often fails to act in utilitarian ways, it is highly likely that they are suffering from akrasia and lack the willpower to act on their values.
You don’t seem to understand the difference between categorical and incremental preferences. If juliawise spends 50% of her time doing selfish stuff and 50% of her time doing utilitarian stuff that doesn’t mean she has no preference for utilitarianism. That would be like saying that I don’t have a preference for pizza because I sometimes eat pizza and sometimes eat tacos.
Furthermore, I expect that if juliawise were given a magic drug that completely removed her akrasia, she would behave in a much more utilitarian fashion.
If utilitarianism were true, we could expect to see a correlation between willpower and morally positive behavior. This appears to be the case; in fact, such behaviors are lumped together into the trait “conscientiousness” because they are correlated.
If utilitarianism were true, then deontological rule systems would be vulnerable to Dutch-booking, while utilitarianism would not be. This appears to be true.
If utilitarianism were true, then it would be unfair for multiple people to have different utility levels, all else being equal. This is practically tautological.
If utilitarianism were true, then goodness would consist primarily of doing things that benefit yourself and others. Again, this is practically tautological.
Now, these pieces of evidence don’t necessarily point to utilitarianism, other types of consequentialist theories might also explain them. But they are informative.
Again, ethical systems are not intrinsically motivating. If you don’t want to follow utilitarianism then that doesn’t mean it’s not true, it just means that you’re a person who sometimes treats other people unfairly and badly. Again, if that doesn’t bother you then there are no universally compelling arguments. But if you’re a reasonably normal human it might bother you a little and make you want to find a consistent system to guide you in your attempts to behave better. Like utilitarianism.
What alternative to utilitarianism are you proposing? Avoiding taking into account multiple people’s welfare? Even a perfect egoist still needs to weigh the welfare of different possible future selves. If you zoom in enough, arbitrariness is everywhere, but “arbitrariness is everywhere, arbitrariness, arbitrariness!” is not a policy. To the extent that our “true” preferences about how to compare welfare have structure, we can try to capture that structure in principles; to the extent that they don’t have structure, picking arbitrary principles isn’t worse than picking arbitrary actions.
Your preferences tell you how to aggregate the preferences of everyone else.
Edit: This post was downvoted to −1 when I came to it, so I thought I’d clarify. It’s since been voted back up to 0, but I just finished writing the clarification, so...
Your preferences are all that you care about (by definition). So you only care about the preferences of others to the extent that their preferences are a component of your own preferences. Now if you claim preference utilitarianism is true, you could be making one of two distinct claims:
“My preferences state that I should maximize the suitably aggregated preferences of all people/relevant agents,” or
“The preferences of each human state that they should maximize the suitably aggregated preferences of all people/relevant agents.”
In both cases, some “suitable aggregation” has to be chosen, and which agents are relevant has to be chosen. The latter is actually a sub-problem of the former: set weights of zero for non-relevant agents in the aggregation. So how does the utilitarian aggregate? Well, that depends on what the utilitarian cares about, quite literally. What do the utilitarian’s preferences say? Maximize average utility? Total utility? Ultimately, what the utilitarian should be maximizing comes back to her own preferences (or to the collective preferences of humanity, if the utilitarian is claiming that our preferences are all the same). Going back to the utilitarian’s own utility function also (potentially) handles things like utility monsters, the preferences of the dead and the potentially-alive, and so forth.
If my preferences are such that only what happens to me matters, I don’t think you can call me a “preference Utilitarian”.
Right, your preferences tell you whether you’re a utilitarian or not in the first place.
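A minimal sketch of the aggregation point above (the weights, numbers, and function are illustrative, not anything proposed in the thread): choosing which agents count is just choosing weights, with zero for the irrelevant ones, and “total versus average” is a further choice that the aggregator’s own preferences have to make.

    # One possible parameterisation of "suitable aggregation": the aggregator's
    # own preferences pick the weights and the combining rule.
    def aggregate(utilities, weights, rule="total"):
        weighted = [w * u for u, w in zip(utilities, weights)]
        if rule == "total":
            return sum(weighted)
        if rule == "average":
            return sum(weighted) / sum(w for w in weights if w > 0)
        raise ValueError("unknown rule")

    utilities = [0.9, 0.4, 0.7]   # three agents' (somehow comparable) utilities

    print(aggregate(utilities, [1, 1, 1], "total"))    # total utilitarianism: 2.0
    print(aggregate(utilities, [1, 1, 1], "average"))  # average utilitarianism: ~0.67
    print(aggregate(utilities, [1, 1, 0], "total"))    # third agent given weight zero: 1.3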