So we could look at this as Moralos having a ranking plus an ‘obligation rule’
There could be Moralos like that, but if we’re talking about the Anglo-Saxon tradition, the obligation ranking is different from the overall personal preference ranking. What you owe is different from what I would prefer.
The thought that disturbs me is that the Moralps really only have one ranking: what they prefer. This is what I find so totalitarian about Utilitarianism.
Justifying an obligation rule seems philosophically tough...
Step back from the magic words. We have preferences. We take action based on those preferences. We reward/punish/coerce people based on whether they act in accord with those preferences or act to ideologically support them, and we reward/punish/coerce based on how they reward/punish/coerce on those first two counts, and so on up through higher and higher orders of evaluation.
So what is obligation? I think it’s what we call our willingness to coerce/punish, up through the higher orders of evaluation, and that is also the core of what makes something a moral preference.
If you’re not going to punish/coerce, but only reward, that preference looks more like the preference for beautiful people.
Is this truly the “Utilitarianism” proposed here? Just rewarding, and not punishing or coercing?
I’d feel less creeped out by Utilitarianism if that were so.
Let me zoom out a bit to explain where I’m coming from.
I’m not fully satisfied with any metaethics, and I feel like I’m making a not-so-well-justified leap of faith to believe in any morality. Given that that’s the case, I’d like to at least minimize the leap of faith. I’d rather have just a mysterious concept of preference than a mysterious concept of preference and a mysterious concept of obligation.
So my vision of the utilitarian project is essentially reductionist: to take the preference ranking as the only magical component*, and build the rest using that plus ordinary is-facts. So if we define ‘obligations’ as ‘things we’re willing to coerce you to do’, we can decide whether X is an obligation by asking “Do we prefer a society that coerces X, or one that doesn’t?”
*Or maybe even start with selfish preferences and then apply a contractarian argument to get the impartial utility function, or something.
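To make the shape of that reduction explicit, here’s a minimal sketch in symbols (with U standing in for whatever impartial preference ranking over whole societies we end up with; that ranking is the assumed magical component, and nothing more is being claimed for it):

$$
X \text{ is an obligation} \iff U(\text{a society that coerces } X) > U(\text{a society that does not coerce } X)
$$

Everything on the right-hand side is then just the preference ranking plus ordinary is-facts about how those two societies would go.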
I’d rather have just a mysterious concept of preference than a mysterious concept of preference and a mysterious concept of obligation.
I don’t think my concept of obligation is mysterious:
So what is obligation? I think it’s what we call our willingness to coerce/punish
Social animals evolved to have all sorts of social preferences, along with mechanisms for enforcing those preferences, such as impulses toward reward/coercion/punishment. Because we are conceptual animals, those mechanisms are open to some conceptual programming.
Also, those mechanisms need not be weighted identically in all people, so different people exhibit different moral behavior and preferences, like Moralps and Moralos.
So my vision of the utilitarian project is essentially reductionist
I think you’re making a good start in any project by first taking a reductionist view. What are we really talking about, when we’re talking about morality?
I think you should do that first, even if your project is the highly conceptually derivative one of sanctioning state power.
My project, such as it was, was an egoist project. OK, I don’t have to be a slave to moral mumbo jumbo. What now? What’s going on with morality?
What I and some other egoists concluded was that we had social preferences too. We reward/punish/coerce as well. But starting with the consciousness that my social preferences are to be expected in a social animal, that they are mine to do with as I will, and that you have yours, which are unlikely to be identical, leads to different conclusions and behaviors than those of people who take their social feelings and impulses as universal commands from the universe.
Interesting, our differences are deeper than I expected!
Do you feel you have a good grip on my foundations, or is there something I should expand on?
Let me check my understanding of your foundations:
You make decisions to satisfy your own preferences. Some of these might be ‘social preferences’, which might include e.g. a preference for fewer malaria deaths in the developing world, which might lead you to want to donate some of your income to charity. You do not admit any sense in which it would be ‘better’ to donate more of your income than you want to, except perhaps by admitting meta-preferences like “I would prefer if I had a stronger preference for fewer malaria deaths”.
When you say someone is obligated to do X, you mean that you would prefer that they be coerced to do X. (I hesitate to summarize it this way, though, because it means that if you say they’re obligated and I say they aren’t, we haven’t actually contradicted each other).
Is the above a correct description of your approach?
It’s not just me. This is my model of human moral activity. We’re social animals with some built-in social preferences, along with other built-in preferences.
You do not admit any sense in which it would be ‘better’
I could come up with a zillion different “betters” where that was the case, but that doesn’t mean that I find it better overall according to my values.
When you say someone is obligated to do X, you mean that you would prefer that they be coerced to do X.
That’s too strong for some cases, but it was my mistake for saying it so categorically in the first place. I can think of a lot of things I consider interpersonal obligations where I wouldn’t want coercion/violence used in retaliation against someone who fails to meet them. I will just assign you a few asshole points and adjust my behavior accordingly, possibly including imposing costs on you out of spite.
That’s the thing. The reality of our preferences is that they weren’t designed to fit into boxes. Preferences are rich in structure, and your attempt to simplify them to one preference ranking to rule them all just won’t adequately model what humans are, no matter how intellectually appealing.
We have lots of preference modalities, which have both similarities to and differences from moral preferences. It tends to be a matter of emphasis and weighting. For example, a lot of our status or beauty preferences function in some ways like our moral preferences: low status entails a greater likelihood of punishment, low status rubs off on you through your failure to disapprove of low status, and both of those occur at higher orders as well, such as when you don’t disapprove of someone who doesn’t disapprove of low status.
In what people call moral concerns, I observe that higher-order punishing/rewarding is more pronounced than it is for other preferences, such as food tastes. If you prefer mint ice cream, it generally won’t be held against you, and most people would consider it weird to do so. If you hold some disapproved-of moral view, it is held against you whether you engage in the corresponding act or not, and it is expected that it will be held against you.
That’s almost rule consequentialism.