I’m seeing fundamental disagreement on what “moral” means.
In the Anglo-Saxon tradition, what is moral is what you should or ought to do, where should and ought both entail a debt one has the obligation to pay. Note that this doesn’t make morality binary; actions are more or less moral depending on how much of the debt you’re paying off. I wouldn’t be surprised if this varied a lot by culture, and I invite people to detail the similarities and differences in other cultures they are familiar with.
What I hear from some people here is Utilitarianism as a preference for certain states of the world, where there is no obligation to do anything—action to bring about those states is optional.
I think in the Anglo-Saxon tradition, actions which fulfill preferences but are not obligatory would be considered praiseworthy or benevolent. Perhaps people would call them moral in terms of more than paying off your debt, but failing to “pay extra” would not be considered immoral.
Let’s call people who view morality as what is obligatory “Moralos”, and people who view morality as what is preferable “Moralps”.
Moralos will view Moralps as unjustly demanding and completely hypocritical—demanding payments on a huge debt, but only making tiny payments, if any, toward those debts themselves. Moralps will view Moralos as pretty much hateful—they don’t even prefer a better world, they want it to be worse.
This looks very familiar to me.
Haidt should really add questions to his poll to get at just what morality means to people, in particular in terms of obligation.
This makes sense… and the idea of ‘praiseworthy/benevolent’ shows that Moralos do have the concept of a full ranking.
So we could look at this as Moralos having a ranking plus an ‘obligation rule’ that tells you how good an outcome you’re obligated to achieve in a given situation, while Moralps don’t accept such a rule and instead just play it by ear.
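(To make that contrast concrete, here is a minimal toy sketch, assuming we can score options on a single numeric “goodness” scale and that the Moralo’s obligation rule is a bare threshold; the names and numbers are purely illustrative, not anyone’s actual theory.)

```python
# Toy model: both camps share one ranking of options by "goodness";
# only the Moralo adds an obligation rule (here, a simple threshold).

def rank(options):
    """Shared ranking: best to worst."""
    return sorted(options, key=lambda o: o["goodness"], reverse=True)

def moralo_verdict(option, obligation_threshold=0.0):
    """Moralo: below the threshold you have failed to pay the debt (immoral);
    at or above it you have met your obligation, and anything beyond that
    is merely praiseworthy 'extra'."""
    return "immoral" if option["goodness"] < obligation_threshold else "obligation met"

def moralp_verdict(option, options):
    """Moralp: no threshold, just a comparison against the best available option."""
    best = rank(options)[0]
    return f"{option['goodness'] - best['goodness']:+.1f} relative to the best option"

options = [
    {"name": "donate 10% of income", "goodness": 5.0},
    {"name": "donate nothing, harm no one", "goodness": 0.0},
    {"name": "defraud a client", "goodness": -8.0},
]
for o in options:
    print(o["name"], "|", moralo_verdict(o), "|", moralp_verdict(o, options))
```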
Justifying an obligation rule seems philosophically tough… unless you justify it as a heuristic, in which case you get to think like a Moralp and act like a Moralo, and abandon your heuristic if it seems like it’s breaking down. Taking Giving What We Can’s 10% pledge is a good example of adopting such a heuristic.
Justifying an obligation rule seems philosophically tough
Maybe, but it’s a very common moral intuition, so anything that purports to be a theory of human morality ought to explain it, or at least explain why we would perceive a distinction between obligatory and praiseworthy-but-non-obligatory actions if no such distinction really exists.
Is heuristic value not a sufficient explanation of the intuition?
I don’t see the heuristic value. We don’t perceive people binarily as, e.g., either attractive or unattractive, friendly or unfriendly, reliable or unreliable; even though we often had to make snap judgements about these attributes, on matters of life and death, we still perceive them as being on a sliding scale. Why would moral vs. immoral be different?
It’d be fairer to compare to other properties of actions rather than properties of people; I think moral vs. immoral is also a sliding scale when applied to people.
That said, we do seem more attached to the binary of moral vs. immoral actions than, say, wise vs. unwise. My first guess is that this stems from a desire to orchestrate social responses to immoral action. From this hypothesis I predict that binary views of moral/immoral will be correlated with coordinated social responses to same.
I think moral vs. immoral is also a sliding scale when applied to people.
Interesting; that may be a real difference in our intuitions. My sense is that unless I’m deliberately paying attention I tend to think of people quite binarily as either decent people or bad people.
Significantly more than you think of them binarily regarding those other categories? Then it is a real difference.
My view of people is that there are a few saints and a few cancers, and a big decent majority in between who sometimes fall short of obligations and sometimes exceed them depending on the situation. The ‘saint’ and ‘cancer’ categories are very small.
What do your ‘good’ and ‘bad’ categories look like, and what are their relative sizes?
I think of a large population of “decent”, who generically never do anything outright bad (I realise this is probably inaccurate, I’m talking about intuitions). There’s some variation within that category in terms of how much outright good they do, but that’s a lot less important. And then a smaller but substantial chunk, say 10%, of “bad” people, people who do outright bad things on occasion (and some variation in how frequently they do them, but again that’s much less important).
So we could look at this as Moralos having a ranking plus an ‘obligation rule’
There could be Moralos like that, but if we’re talking about the Anglo-Saxon tradition, the obligation ranking is different from the overall personal preference ranking. What you owe is different from what I would prefer.
The thought that disturbs me is that the Moralps really only have one ranking, what they prefer. This is what I find so totalitarian about Utilitarianism.
Justifying an obligation rule seems philosophically tough...
Step back from the magic words. We have preferences. We take action based on those preferences. We reward/punish/coerce people based on whether they act in accord with those preferences, or act to ideologically support them, or we reward/punish/coerce based on how they reward/punish/coerce on the first two, and so on up through higher and higher orders of evaluation.
So what is obligation? I think it’s what we call our willingness to coerce/punish, up through the higher orders of evaluation, and that’s similarly the core of what makes something a moral preference.
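(A crude illustration of the regress, if it helps: each order just re-applies the same reward/punish response to how someone handled the order below. The toy code below only shows that structure; the specific people and judgments are made up.)

```python
# Toy sketch of the "higher and higher orders": order 1 judges an action
# directly; each later order judges how someone responded at the order below.

def respond(meets_our_preference: bool) -> str:
    return "reward" if meets_our_preference else "punish"

# Order 1: Bob breaks a promise, so we punish Bob.
our_response_to_bob = respond(False)                                      # "punish"

# Order 2: Carol rewards Bob anyway; she failed to respond as we would,
# so we punish Carol.
carols_response = "reward"
our_response_to_carol = respond(carols_response == our_response_to_bob)   # "punish"

# Order 3: Dave approves of Carol; same logic, one level up.
daves_response = "reward"
our_response_to_dave = respond(daves_response == our_response_to_carol)   # "punish"

print(our_response_to_bob, our_response_to_carol, our_response_to_dave)
```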
If you’re not going to punish/coerce, and only reward, that preference looks more like the preference for beautiful people.
Is this truly the “Utilitarianism” proposed here? Just rewarding, and not punishing or coercing?
I’d feel less creeped out by Utilitarianism if that were so.
Let me zoom out a bit to explain where I’m coming from.
I’m not fully satisfied with any metaethics, and I feel like I’m making a not-so-well-justified leap of faith to believe in any morality. Given that that’s the case, I’d like to at least minimize the leap of faith. I’d rather have just a mysterious concept of preference than a mysterious concept of preference and a mysterious concept of obligation.
So my vision of the utilitarian project is essentially reductionist: to take the preference ranking as the only magical component*, and build the rest using that plus ordinary is-facts. So if we define ‘obligations’ as ‘things we’re willing to coerce you to do’, we can decide whether X is an obligation by asking “Do we prefer a society that coerces X, or one that doesn’t?”
*Or maybe even start with selfish preferences and then apply a contractarian argument to get the impartial utility function, or something.
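(A minimal sketch of that reduction, assuming we can score societies with the same single preference ranking; the scoring function and numbers below are placeholders, not a worked-out utilitarian calculus.)

```python
# Toy reduction: "X is an obligation" is rebuilt from the one preference
# ranking, by comparing a society that coerces X with one that doesn't.

def preference_score(society: dict) -> float:
    """Stand-in for the single preference ranking over states of the world."""
    return society["welfare"] - society["coercion_cost"]

def is_obligation(welfare_if_coerced: float, welfare_if_not: float,
                  coercion_cost: float) -> bool:
    coercing = {"welfare": welfare_if_coerced, "coercion_cost": coercion_cost}
    hands_off = {"welfare": welfare_if_not, "coercion_cost": 0.0}
    return preference_score(coercing) > preference_score(hands_off)

# Illustrative numbers only: coercing "don't steal" looks worth its cost,
# while coercing "donate half your income" does not.
print(is_obligation(welfare_if_coerced=10.0, welfare_if_not=4.0, coercion_cost=1.0))  # True
print(is_obligation(welfare_if_coerced=12.0, welfare_if_not=9.0, coercion_cost=6.0))  # False
```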
I’d rather have just a mysterious concept of preference than a mysterious concept of preference and a mysterious concept of obligation.
I don’t think my concept of obligation is mysterious:
So what is obligation? I think it’s what we call our willingness to coerce/punish
Social animals evolved to have all sorts of social preferences, and mechanisms for enforcing those preferences, such as impulses toward reward/coercion/punishment. Being conceptual animals, those mechanisms are open to some conceptual programming.
Also, those mechanisms need not be weighted identically in all people, so different people exhibit different moral behavior and preferences, like Moralps and Moralos.
So my vision of the utilitarian project is essentially reductionist
I think you’re making a good start in any project by first taking a reductionist view. What are we really talking about, when we’re talking about morality?
I think you should do that first, even if your project is the highly conceptually derivative one of sanctioning state power.
My project, such as it was, was an egoist project. OK, I don’t have to be a slave to moral mumbo jumbo. What now? What’s going on with morality?
What I and some other egoists concluded was that we had social preferences too. We reward/punish/coerce as well. But starting from the awareness that my social preferences are to be expected in a social animal, that they are mine to do with as I will, and that you have yours, which are unlikely to be identical, leads to different conclusions and behaviors than those of people who take their social feelings and impulses as universal commands from the universe.
Interesting, our differences are deeper than I expected!
Do you feel you have a good grip on my foundations, or is there something I should expand on?
Let me check my understanding of your foundations:
You make decisions to satisfy your own preferences. Some of these might be ‘social preferences’, which might include e.g. a preference for fewer malaria deaths in the developing world, which might lead you to want to donate some of your income to charity. You do not admit any sense in which it would be ‘better’ to donate more of your income than you want to, except perhaps by admitting meta-preferences like “I would prefer if I had a stronger preference for fewer malaria deaths”.
When you say someone is obligated to do X, you mean that you would prefer that they be coerced to do X. (I hesitate to summarize it this way, though, because it means that if you say they’re obligated and I say they aren’t, we haven’t actually contradicted each other).
Is the above a correct description of your approach?
It’s not just me. This is my model of human moral activity. We’re social animals with some built in social preferences, along with other built in preferences.
You do not admit any sense in which it would be ‘better’
I could come up with a zillion different “betters” where that was the case, but that doesn’t mean that I find it better overall according to my values.
When you say someone is obligated to do X, you mean that you would prefer that they be coerced to do X.
That’s too strong for some cases, but it was my mistake for saying it so categorically in the first place. I can think of a lot of things I consider interpersonal obligations where I wouldn’t want coercion/violence used in retaliation against those who fail to meet them. I will just assign you a few asshole points, and adjust my behavior accordingly, possibly including imposing costs on you out of spite.
That’s the thing. The reality of our preferences is that they weren’t designed to fit into boxes. Preferences are rich in structure, and your attempt to simplify them to one preference ranking to rule them all just won’t adequately model what humans are, no matter how intellectually appealing.
We have lots of preference modalities, which have similarities and differences with moral preferences. It tends to be a matter of emphasis and weighting. For example, a lot of our status or beauty preferences function in some ways like our moral preferences. Low status entails greater likelihood of punishment, low status rubs off on you through your failure to disapprove of low status, and both of those occur at higher orders as well—such as if you don’t disapprove of someone who doesn’t disapprove of low status.
In what people call moral concerns, I observe that higher-order punishing/rewarding is more pronounced than for other preferences, such as food tastes. If you prefer mint ice cream, it generally won’t be held against you, and most people would consider it weird to do so. If you have some disapproved-of moral view, it is held against you, whether you engage in the act or not, and it is expected that it will be held against you.
That’s almost rule consequentialism.
What buybuy said. Plus… Moralps are possibly hypocritical, but it could be that they are just wrong, claiming one preference but acting as if they have another. If I claim that I would never prefer a child to die so that I can buy a new car, and I then buy a new car instead of sending my money to feed starving children in wherever, then I am effectively making incorrect statements about my preferences, OR I am using the word preferences in a way that renders it uninteresting. Preferences are worth talking about precisely to the extent that they describe what people will actually do.
I suspect in the case of starving children and cars, my ACTUAL preference is much more sentimental and much less universal. If I came home one day and lying on my lawn was a starving child, I would very likely feed that child even if this food came from a store I was keeping to trade for a new car. But if this child is around the corner and out of my sight, then it’s Tesla S time!
So Moralps are possibly hypocritical, but certainly wrong in describing their own preferences, IF we insist that preferences are things that dictate our volition.
Utilitarianism talks about which actions are more moral. It doesn’t talk about which actions a person actually “prefers.” I think it’s more moral to donate 300 dollars to charity than to take myself and two friends out for a holiday dinner. Yet I have reservations for Dec 28th. The fact that I am actually spending the money on my friends and myself doesn’t mean I think this is the most moral thing I could be doing.
I have never claimed people are required to optimize their actions in the pursuit of improving the world. So why would it be hypocritical for me not to try to maximize world utility?
So you are saying: “the right thing to do is donate $300 to charity but I don’t see why I should do that just because I think it is the right thing to do.”
Well, once we start talking about the right thing to do without attaching any sense of obligation to doing that thing, I’d like to know what the point of talking about morality is at all. It seems it just becomes another way to say “yay donating $300!” and has no more meaning than that.
Under what I thought were the accepted definitions of the words, saying the moral thing to do is to donate $300 was the same as saying I ought to donate $300. Under this definition, discussions of what was and was not moral really carried more weight than just saying “yay donating $300!”
I didn’t say it was “the right thing” to do. I said it was more moral than what I am actually planning to do. You seem to just be assuming people are required to act in the way they find most moral. I don’t think this is a reasonable thing to ask of people.
Utilitarian conclusions clearly contain more info than “yay X,” since they typically allow one to compare different positive options as to which is more positive. In addition, in many contexts utilitarianism gives you a framework for debating what to do. Many people will agree the primary goal of laws in the USA should be to maximize utility for US citizens/residents as long as the law won’t dramatically harm non-residents (some libertarians disagree, but I am just making a claim about what people think). Under these conditions utilitarianism tells you what to do.
Utilitarianism does not tell you how to act in daily life, since it’s unclear how much you should weigh the morality of an action against other concerns.
A moral theory that doesn’t tell you how to act in daily life seems incomplete, at least in comparison to e.g. deontological approaches. If one defines a moral framework as something that does tell you how to act in daily life, as I suspect many of the people you’re thinking of do, then to the extent that utilitarianism is a moral framework, it requires extreme self-sacrifice (because the only, or at least most obvious, way to interpret utilitarianism as something that does tell you how to act in daily life is to interpret it as saying that you are required to act in the way that maximizes utility).
So on some level it’s just an argument about definitions, but there is a real point: either utilitarianism requires this extreme self-sacrifice, or it is something substantially less useful in daily life than deontology or virtue ethics.
Preferences of this sort might be interesting not because they describe what their holders will do themselves, but because they describe what their holders will try to get other people to do. I might think that diverting funds from luxury purchases to starving Africans is always morally good but not care enough (or not have enough moral backbone, or whatever) to divert much of my own money that way—but I might e.g. consistently vote for politicians who do, or choose friends who do, or argue for doing it, or something.
Your comment reads to me like a perfect description of hypocrisy. Am I missing something?
Nope. Real human beings are hypocrites, to some extent, pretty much all the time.
But holding a moral value and being hypocritical about it is different from not holding it at all, so I don’t think it’s correct to say that moral values held hypocritically are uninteresting or meaningless or anything like that.