This makes sense… and the idea of ‘praiseworthy/benevolent’ shows that Moralos do have the concept of a full ranking.
So we could look at this as Moralos having a ranking plus an ‘obligation rule’ that tells you how good an outcome you’re obligated to achieve in a given situation, while Moralps don’t accept such a rule and instead just play it by ear.
Justifying an obligation rule seems philosophically tough… unless you justify it as a heuristic, in which case you get to think like a Moralp and act like a Moralo, and abandon your heuristic if it seems like it’s breaking down. Taking Giving What We Can’s 10% pledge is a good example of adopting such a heuristic.
Justifying an obligation rule seems philosophically tough
Maybe, but it’s a very common moral intuition, so anything that purports to be a theory of human morality ought to explain it, or at least explain why we would perceive a distinction between obligatory and praiseworthy-but-non-obligatory actions where none exists.
Is heuristic value not a sufficient explanation of the intuition?
I don’t see the heuristic value. We don’t perceive people binarily as, e.g., either attractive or unattractive, friendly or unfriendly, reliable or unreliable; even though we have often had to make snap judgements about these attributes, sometimes on matters of life and death, we still perceive them as lying on a sliding scale. Why would moral vs. immoral be different?
It’d be fairer to compare to other properties of actions rather than properties of people; I think moral vs. immoral is also a sliding scale when applied to people.
That said, we do seem more attached to the binary of moral vs. immoral actions than, say, wise vs. unwise. My first guess is that this stems from a desire to orchestrate social responses to immoral action. From this hypothesis I predict that binary views of moral/immoral will be correlated with coordinated social responses to immoral actions.
I think moral vs. immoral is also a sliding scale when applied to people.
Interesting; that may be a real difference in our intuitions. My sense is that unless I’m deliberately paying attention I tend to think of people quite binarily as either decent people or bad people.
Significantly more than you think of them binarily regarding those other categories? Then it is a real difference.
My view of people is that there are a few saints and a few cancers, and a big decent majority in between who sometimes fall short of obligations and sometimes exceed them depending on the situation. The ‘saint’ and ‘cancer’ categories are very small.
What do your ‘good’ and ‘bad’ categories look like, and what are their relative sizes?
I think of a large population of “decent” people, who generically never do anything outright bad (I realise this is probably inaccurate; I’m talking about intuitions). There’s some variation within that category in terms of how much outright good they do, but that’s a lot less important. And then there’s a smaller but substantial chunk, say 10%, of “bad” people: people who do outright bad things on occasion (with some variation in how frequently they do them, but again that’s much less important).
So we could look at this as Moralos having a ranking plus an ‘obligation rule’
There could be Moralos like that, but if we’re talking about the Anglo-Saxon tradition, the obligation ranking is different from the overall personal preference ranking. What you owe is different from what I would prefer.
The thought that disturbs me is that the Moralps really only have one ranking, what they prefer. This is what I find so totalitarian about Utilitarianism.
Justifying an obligation rule seems philosophically tough...
Step back from the magic words. We have preferences. We take action based on those preferences. We reward/punish/coerce people based on whether they act in accord with those preferences, or act to ideologically support them, or based on how they reward/punish/coerce on the first two, and so on up through higher and higher orders of evaluation.
So what is obligation? I think it’s what we call our willingness to coerce/punish, up through those higher orders of evaluation, and that’s similarly the core of what makes something a moral preference.
If you’re not going to punish/coerce, and only reward, that preference looks more like the preference for beautiful people.
Is this truly the “Utilitarianism” proposed here? Just rewarding, and not punishing or coercing?
I’d feel less creeped out by Utilitarianism if that were so.
Let me zoom out a bit to explain where I’m coming from.
I’m not fully satisfied with any metaethics, and I feel like I’m making a not-so-well-justified leap of faith to believe in any morality. Given that that’s the case, I’d like to at least minimize the leap of faith. I’d rather have just a mysterious concept of preference than a mysterious concept of preference and a mysterious concept of obligation.
So my vision of the utilitarian project is essentially reductionist: to take the preference ranking as the only magical component*, and build the rest using that plus ordinary is-facts. So if we define ‘obligations’ as ‘things we’re willing to coerce you to do’, we can decide whether X is an obligation by asking “Do we prefer a society that coerces X, or one that doesn’t?”
*Or maybe even start with selfish preferences and then apply a contractarian argument to get the impartial utility function, or something.
I’d rather have just a mysterious concept of preference than a mysterious concept of preference and a mysterious concept of obligation.
I don’t think my concept of obligation is mysterious:
So what is obligation? I think it’s what we call our willingness to coerce/punish
Social animals evolved to have all sorts of social preferences, and the mechanisms for enforcing those preferences, such as impulses toward reward/coercion/punishment. Because we are conceptual animals, those mechanisms are open to some conceptual programming.
Also, those mechanisms need not be weighted identically in all people, so different people exhibit different moral behavior and preferences, like Moralps and Moralos.
So my vision of the utilitarian project is essentially reductionist
I think you’re making a good start in any project by first taking a reductionist view. What are we really talking about, when we’re talking about morality?
I think you should do that first, even if your project is the highly conceptually derivative one of sanctioning state power.
My project, such as it was, was an egoist project. OK, I don’t have to be a slave to moral mumbo jumbo. What now? What’s going on with morality?
What I and some other egoists concluded was that we had social preferences too. We reward/punish/coerce as well. But starting with a consciousness that my social preferences are to be expected in a social animal, and are mine, to do with what I will, and that you have yours, which are unlikely to be identical, leads to different conclusions and behaviors than those of people who take their social feelings and impulses as universal commands from the universe.
Interesting, our differences are deeper than I expected!
Do you feel you have a good grip on my foundations, or is there something I should expand on?
Let me check my understanding of your foundations:
You make decisions to satisfy your own preferences. Some of these might be ‘social preferences’, which might include e.g. a preference for fewer malaria deaths in the developing world, which might lead you to want to donate some of your income to charity. You do not admit any sense in which it would be ‘better’ to donate more of your income than you want to, except perhaps by admitting meta-preferences like “I would prefer if I had a stronger preference for fewer malaria deaths”.
When you say someone is obligated to do X, you mean that you would prefer that they be coerced to do X. (I hesitate to summarize it this way, though, because it means that if you say they’re obligated and I say they aren’t, we haven’t actually contradicted each other).
Is the above a correct description of your approach?
It’s not just me. This is my model of human moral activity. We’re social animals with some built in social preferences, along with other built in preferences.
You do not admit any sense in which it would be ‘better’
I could come up with a zillion different “betters” where that was the case, but that doesn’t mean that I find it better overall according to my values.
When you say someone is obligated to do X, you mean that you would prefer that they be coerced to do X.
That’s too strong for some cases, but it was my mistake for stating it so categorically in the first place. I can think of a lot of things I consider interpersonal obligations where I wouldn’t want coercion/violence used against the violator in retaliation. I will just assign you a few asshole points and adjust my behavior accordingly, possibly including imposing costs on you out of spite.
That’s the thing. The reality of our preferences is that they weren’t designed to fit into boxes. Preferences are rich in structure, and your attempt to simplify them to one preference ranking to rule them all just won’t adequately model what humans are, no matter how intellectually appealing.
We have lots of preference modalities, which have similarities and differences with moral preferences. It tends to be a matter of emphasis and weighting. For example, a lot of our status or beauty preferences function in some ways like our moral preferences. Low status entails a greater likelihood of punishment, low status rubs off on you through your failure to disapprove of low status, and both of those occur at higher orders as well, such as when you don’t disapprove of someone who doesn’t disapprove of low status.
In what people call moral concerns, I observe that higher-order punishing/rewarding is more pronounced than for other preferences, such as food tastes. If you prefer mint ice cream, it generally won’t be held against you, and most people would consider it weird to do so. If you have some disapproved-of moral view, it is held against you, whether you engage in the act or not, and it is expected that it will be held against you.
That’s almost rule consequentialism.