Nutrition is also impossible to perfectly understand, but I take my best guess and know not to eat rocks. Choosing arbitrary rules is not a good alternative to doing your best at rules you don’t fully understand.
How would you know whether utilitarianism is telling you to do the right thing or not? What experiment would you run? On Less Wrong, these are supposed to be basic questions you may ask of any belief. Why is it okay to place utilitarianism in its own non-overlapping magisterium (NOMA), but not, say, religion?
I am simply pointing out that utilitarianism doesn’t meet Less Wrong’s epistemic standards, and that if utilitarianism is mere personal preference, your arguments are no more persuasive to me than a chocolate-eater’s would be to a vanilla-eater (except that, in this case, chocolate (utilitarianism) is more expensive than vanilla (the Ten Commandments)).
Also, the Decalogue is not an arbitrary set of rules. We have quite good evidence that it is adaptive in many different environments.
Sorry, I was going in the wrong direction. You’re right that utilitarianism isn’t a tool, but a descriptor of what I value.
I care about both my wellbeing and my husband’s wellbeing. No moral system spells out how to balance these things—the Decalogue merely forbids killing him or cheating on him, but doesn’t address whether it’s permissible to turn on the light while he’s trying to sleep or if I should dress in the dark instead. Should I say, “balancing multiple people’s needs is too computationally costly” and give up on the whole project?
When a computation gets too maddening, maybe so. Said husband (jkaufman) and I value our own wellbeing, and we also value the lives of strangers. We give some of our money to buy mosquito nets for strangers, but we don’t have a perfect way to calculate how much, and at points it has been maddening to choose. So we pick an amount, somewhat arbitrarily, and go with it.
Picking a simpler system might minimize thought required on my part, but it wouldn’t maximize what I want to maximize.
Sorry, I was going in the wrong direction. You’re right that utilitarianism isn’t a tool, but a descriptor of what I value.
So, utilitarianism isn’t true, it is a matter of taste (preferences, values, etc...)? I’m fine with that. The problem I see here is this: neither I, nor anyone I have ever met, actually has preferences that are isomorphic to utilitarianism (I am not including you, because I do not believe you when you say that utilitarianism describes your value system; I will explain why below).
I care about both my wellbeing and my husband’s wellbeing. No moral system spells out how to balance these things—the Decalogue merely forbids killing him or cheating on him, but doesn’t address whether it’s permissible to turn on the light while he’s trying to sleep or if I should dress in the dark instead. Should I say, “balancing multiple people’s needs is too computationally costly” and give up on the whole project?
This is not a reason to adopt utilitarianism relative to alternative moral theories. Why? Because utilitarianism is not required in order to balance some people’s interests against others’. Altruism does not require weighing everyone in your preference function equally, but utilitarianism does. Even egoists (typically) have friends that they care about. The motto of utilitarianism is “the greatest good for the greatest number”, not “the greatest good for me and the people I care most about”. If you have ever purchased a birthday present for, say, your husband instead of feeding the hungry (who would have gotten more utility from those particular resources), then to that extent your values are not utilitarian (as demonstrated by WARP).
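To make the revealed-preference point concrete (WARP is the weak axiom of revealed preference), here is a minimal sketch in Python. The option names and welfare numbers are invented, and it is a simple consistency check in the spirit of the argument rather than a formal WARP test: it asks whether an observed choice can be rationalized by the ranking the chooser claims to hold.

```python
# Hypothetical data, purely for illustration: a claimed "total welfare" ranking
# and one observed choice made while both options were available.
claimed_welfare = {"donate_to_famine_relief": 100, "birthday_present": 5}

observed_choices = [
    ("birthday_present", {"birthday_present", "donate_to_famine_relief"}),
]

def contradictions(choices, ranking):
    """Return (chosen, better_available) pairs the claimed ranking cannot rationalize."""
    out = []
    for chosen, available in choices:
        best_available = max(available, key=ranking.get)
        if ranking[best_available] > ranking[chosen]:
            out.append((chosen, best_available))
    return out

print(contradictions(observed_choices, claimed_welfare))
# -> [('birthday_present', 'donate_to_famine_relief')]
```

If the chooser really ranked options by total welfare, the check would come back empty; any non-empty result is a choice that the stated ranking cannot account for.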
When a computation gets too maddening, maybe so. Said husband (jkaufman) and I value our own wellbeing, and we also value the lives of strangers. We give some of our money to buy mosquito nets for strangers, but we don’t have a perfect way to calculate how much, and at points it has been maddening to choose. So we pick an amount, somewhat arbitrarily, and go with it.
Even if you could measure utility perfectly and perform rock-solid interpersonal utility calculations, I suspect that you would still not weigh your own well-being (or your husband’s, friends’, etc.) equally with that of random strangers. If I am right about this, then your defence of utilitarianism as your own personal system of value fails on the ground that it is a false claim about a particular person’s preferences (namely, yours).
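To spell out that inference with invented numbers (nothing here is measured; it only illustrates the arithmetic): if utilities really were perfectly measurable, a single spending choice would pin down a lower bound on the weight the chooser implicitly places on near-and-dear welfare relative to stranger welfare.

```python
# Invented figures, for illustration only.
gain_to_husband = 4.0     # welfare the present produces, in some common unit
gain_to_strangers = 40.0  # welfare the same money produces as famine relief

# Choosing the present over the donation reveals (with the stranger weight
# normalized to 1):
#     weight_on_husband * gain_to_husband >= 1 * gain_to_strangers
implied_minimum_weight = gain_to_strangers / gain_to_husband
print(implied_minimum_weight)  # 10.0 -- a strict equal-weight utilitarian needs this to be 1
```

Any implied weight above 1 is exactly the unequal weighing I am claiming the choice reveals.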
In summary, I find utilitarianism as a proposition and utilitarianism as a value system very unpersuasive. As for the former, I have requested of sophisticated and knowledgeable utilitarians that they tell me what experiences I should anticipate in the world if utilitarianism is true (and that I should not anticipate if other, contradictory, moral theories were true) and, so far, they have been unable to do so. Propositions of this kind (meaningless or metaphysical propositions) don’t ordinarily warrant wasting much time thinking about them. As for the latter, according to my revealed preferences, utilitarianism does not describe my preferences at all accurately, so is not much use for determining how to act. Simply, it is not, in fact, my value system.
So, utilitarianism isn’t true, it is a matter of taste
I don’t understand how “true” applies to a matter of taste; a taste for chocolate isn’t “truer” than any other.
utilitarianism is not required in order to balance some people’s interests against others’.
There are others, but this is the one that seems best to me.
If you have ever purchased a birthday present for, say, your husband instead of feeding the hungry
This is the type of decision we found maddening, which is why we currently have firm charity and non-charity budgets. Before that system I did spend money on non-necessities, and I felt terrible about it. So you’re correct that I have other preferences besides utilitarianism.
I don’t think it’s fair or accurate to say “If you ever spent any resources on anything other than what you say you prefer, it’s not really your preference.” I believe people can prefer multiple things at once. I value the greatest good for the greatest number, and if I could redesign myself as a perfect person, I would always act on that preference. But as a mammal, yes, I also have a drive to care for me and mine more than strangers. When I’ve tried to suppress that entirely, I was very unhappy.
I think a pragmatic utilitarian takes into account the fact that we are mammals, and that at some point we’ll probably break down if we don’t satisfy our other preferences a little. I try to balance it at a point where I can sustain what I’m doing for the rest of my life.
I came late to this whole philosophy thing, so it took me a while to find out “utilitarianism” is what people called what I was trying to do. The name isn’t really important to me, so it may be that I’ve been using it wrong or we have different definitions of what counts as real utilitarianism.
So, utilitarianism isn’t true, it is a matter of taste (preferences, values, etc...)?
Saying utilitarianism isn’t true because some people aren’t automatically motivated to follow it is like saying that grass isn’t green because some people wish it was purple. If you don’t want to follow utilitarian ethics that doesn’t mean they aren’t true. It just means that you’re not nearly as good a person as someone who does. If you genuinely want to be a bad person then nothing can change your mind, but most human beings place at least some value on morality.
You’re confusing moral truth with motivational internalism. Motivational internalism states that moral knowledge is intrinsically motivating: simply knowing that something is good and right motivates a rational entity to do it. That’s obviously false.
Its opposite is motivational externalism, which states that we are motivated to act morally by our moral emotions (i.e., sympathy, compassion) and willpower. Motivational externalism seems obviously correct to me. That in turn indicates that people will often act immorally if their willpower, compassion, and other moral emotions are depleted, even if they know intellectually that their behavior is less moral than it could be.
If you have ever purchased a birthday present for, say, your husband instead of feeding the hungry (who would have gotten more utility from those particular resources), then to that extent your values are not utilitarian (as demonstrated by WARP).
There is a vast, vast amount of writing at Less Wrong on the fact that people’s behavior and their values often fail to coincide. Have you never read anything on the topic of “akrasia”? Revealed preference is moderately informative about people’s values, but it is nowhere near 100% reliable. If someone talks about how utilitarianism is correct, but often fails to act in utilitarian ways, it is highly likely they are suffering from akrasia and lack the willpower to act on their values.
Even if you could measure utility perfectly and perform rock-solid interpersonal utility calculations, I suspect that you would still not weigh your own well-being (or your husband’s, friends’, etc.) equally with that of random strangers. If I am right about this, then your defence of utilitarianism as your own personal system of value fails on the ground that it is a false claim about a particular person’s preferences (namely, yours).
You don’t seem to understand the difference between categorical and incremental preferences. If juliawise spends 50% of her time doing selfish stuff and 50% of her time doing utilitarian stuff, that doesn’t mean she has no preference for utilitarianism. That would be like saying that I don’t have a preference for pizza because I sometimes eat pizza and sometimes eat tacos.
Furthermore, I expect that if juliawise was given a magic drug that completely removed her akrasia she would behave in a much more utilitarian fashion.
As for the former, I have requested of sophisticated and knowledgeable utilitarians that they tell me what experiences I should anticipate in the world if utilitarianism is true (and that I should not anticipate if other, contradictory, moral theories were true) and, so far, they have been unable to do so.
If utilitarianism was true we could expect to see a correlation between willpower and morally positive behavior. This appears to be true; in fact, such behaviors are lumped together into the trait “conscientiousness” because they are correlated.
If utilitarianism was true then deontological rule systems would be vulnerable to Dutch-booking, while utilitarianism would not be. This appears to be true (see the toy money-pump sketch after this list).
If utilitarianism was true then it would be unfair for multiple people to have different utility levels, all else being equal. This is practically tautological.
If utilitarianism was true then goodness would consist primarily of doing things that benefit yourself and others. Again, this is practically tautological.
Now, these pieces of evidence don’t necessarily point to utilitarianism; other types of consequentialist theories might also explain them. But they are informative.
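As a toy illustration of the Dutch-book (money-pump) point: the sketch below, with invented options and prices, pumps an agent whose preferences are cyclic and therefore cannot be represented by a single utility function, which is the standard form of the argument; it is not a proof that every deontological system is pumpable.

```python
FEE = 1  # price paid for each trade the agent regards as an upgrade
UTILITY = {"A": 3, "B": 2, "C": 1}  # one consistent ranking of the options

def cyclic_prefers(new, old):
    """Intransitive preferences: A beats B, B beats C, and C beats A."""
    return (new, old) in {("A", "B"), ("B", "C"), ("C", "A")}

def utility_prefers(new, old):
    """Preferences representable by a single utility function (hence transitive)."""
    return UTILITY[new] > UTILITY[old]

def run_pump(prefers, start="B", offers=("A", "C", "B", "A", "C", "B")):
    """Offer a sequence of paid trades; the agent accepts any trade it prefers."""
    holding, money = start, 0
    for offer in offers:
        if prefers(offer, holding):
            holding, money = offer, money - FEE
    return holding, money

print(run_pump(cyclic_prefers))   # ('B', -6): back where it started, strictly poorer
print(run_pump(utility_prefers))  # ('A', -1): trades up once, then stops accepting
```

The cyclic agent pays for a round trip and ends up holding exactly what it began with; the utility-maximizing agent cannot be cycled that way.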
As for the latter, according to my revealed preferences, utilitarianism does not describe my preferences at all accurately, so is not much use for determining how to act. Simply, it is not, in fact, my value system.
Again, ethical systems are not intrinsically motivating. If you don’t want to follow utilitarianism then that doesn’t mean it’s not true; it just means that you’re a person who sometimes treats other people unfairly and badly. Again, if that doesn’t bother you then there are no universally compelling arguments. But if you’re a reasonably normal human it might bother you a little and make you want to find a consistent system to guide you in your attempts to behave better. Like utilitarianism.