What buybuy said. Plus… Moralps are possibly hypocritical, but it could be that they are just wrong, claiming one preference but acting as if they have another. If I claim that I would never prefer a child to die so that I can buy a new car, and I then buy a new car instead of sending my money to feed starving children in wherever, then I am effectively making incorrect statements about my preferences, OR I am using the word “preferences” in a way that renders it uninteresting. Preferences are worth talking about precisely to the extent that they describe what people will actually do.
I suspect that in the case of starving children and cars, my ACTUAL preference is much more sentimental and much less universal. If I came home one day and found a starving child lying on my lawn, I would very likely feed that child, even if the food came from a store I was keeping to trade for a new car. But if the child is around the corner and out of my sight, then it’s Tesla S time!
So Moralps are possibly hypocritical, but certainly wrong in describing their own preferences, IF we insist that preferences are things that dictate our volition.
Utilitarianism talks about which actions are more moral. It doesn’t talk about which actions a person actually “prefers.” I think it’s more moral to donate 300 dollars to charity than to take myself and two friends out for a holiday dinner. Yet I have reservations for Dec 28th. The fact that I am actually spending the money on my friends and myself doesn’t mean I think this is the most moral thing I could be doing.
I have never claimed people are required to optimize their actions in the pursuit of improving the world. So why would it be hypocritical for me not to try to maximize world utility?
So you are saying: “the right thing to do is to donate $300 to charity, but I don’t see why I should do that just because I think it is the right thing to do.”
Well, once we start talking about the right thing to do without attaching any sense of obligation to doing that thing, I’d like to know what the point of talking about morality is at all. It seems it just becomes another way to say “yay donating $300!” and has no more meaning than that.
Under what I thought were the accepted definitions of the words, saying the moral thing to do is to donate $300 was the same as saying I ought to donate $300. Under this definition, discussions of what was and was not moral carried more weight than just saying “yay donating $300!”
I didn’t say it was “the right thing” to do. I said it was more moral than what I am actually planning to do. You seem to just be assuming people are required to act in the way they find most moral. I don’t think this is a reasonable thing to ask of people.
Utilitarian conclusions clearly contain more info than “yay X,” since they typically allow one to compare different positive options as to which is more positive. In addition, in many contexts utilitarianism gives you a framework for debating what to do. Many people will agree that the primary goal of laws in the USA should be to maximize utility for US citizens/residents, as long as the law won’t dramatically harm non-residents (some libertarians disagree, but I am just making a claim about what people think). Under these conditions, utilitarianism tells you what to do.
Utilitarianism does not tell you how to act in daily life, since it’s unclear how much you should weigh the morality of an action against other concerns.
A moral theory that doesn’t tell you how to act in daily life seems incomplete, at least in comparison to e.g. deontological approaches. If one defines a moral framework as something that does tell you how to act in daily life, as I suspect many of the people you’re thinking of do, then to the extent that utilitarianism is a moral framework, it requires extreme self-sacrifice (because the only, or at least most obvious, way to interpret utilitarianism as something that does tell you how to act in daily life is to interpret it as saying that you are required to act in the way that maximizes utility).
So on some level it’s just an argument about definitions, but there is a real point: either utilitarianism requires this extreme self-sacrifice, or it is something substantially less useful in daily life than deontology or virtue ethics.
Preferences of this sort might be interesting not because they describe what their holders will do themselves, but because they describe what their holders will try to get other people to do. I might think that diverting funds from luxury purchases to starving Africans is always morally good but not care enough (or not have enough moral backbone, or whatever) to divert much of my own money that way—but I might e.g. consistently vote for politicians who do, or choose friends who do, or argue for doing it, or something.
Your comment reads to me like a perfect description of hypocrisy. Am I missing something?
Nope. Real human beings are hypocrites, to some extent, pretty much all the time.
But holding a moral value and being hypocritical about it is different from not holding it at all, so I don’t think it’s correct to say that moral values held hypocritically are uninteresting or meaningless or anything like that.