I’d rather have just a mysterious concept of preference than a mysterious concept of preference and a mysterious concept of obligation.
I don’t think my concept of obligation is mysterious:
So what is obligation? I think it's what we call our willingness to coerce/punish.
Social animals evolved to have all sorts of social preferences, along with mechanisms for enforcing those preferences, such as impulses toward reward/coercion/punishment. Because we are conceptual animals, those mechanisms are open to some conceptual programming.
Also, those mechanisms need not be weighted identically in all people, so that they exhibit different moral behavior and preferences, like Moralps and Moralos.
So my vision of the utilitarian project is essentially reductionist
I think you’re making a good start in any project by first taking a reductionist view. What are we really talking about, when we’re talking about morality?
I think you should do that first, even if your project is the highly conceptually derivative one of sanctioning state power.
My project, such as it was, was an egoist project. OK, I don’t have to be a slave to moral mumbo jumbo. What now? What’s going on with morality?
What I and some other egoists concluded was that we had social preferences too. We reward/punish/coerce as well. But starting from the awareness that my social preferences are to be expected in a social animal, that they are mine to do with as I will, and that you have yours, which are unlikely to be identical, leads to different conclusions and behaviors than taking one's social feelings and impulses as universal commands from the universe.
Interesting, our differences are deeper than I expected!
Do you feel you have a good grip on my foundations, or is there something I should expand on?
Let me check my understanding of your foundations:
You make decisions to satisfy your own preferences. Some of these might be ‘social preferences’, which might include e.g. a preference for fewer malaria deaths in the developing world, which might lead you to want to donate some of your income to charity. You do not admit any sense in which it would be ‘better’ to donate more of your income than you want to, except perhaps by admitting meta-preferences like “I would prefer if I had a stronger preference for fewer malaria deaths”.
When you say someone is obligated to do X, you mean that you would prefer that they be coerced to do X. (I hesitate to summarize it this way, though, because it means that if you say they’re obligated and I say they aren’t, we haven’t actually contradicted each other).
Is the above a correct description of your approach?
It’s not just me. This is my model of human moral activity. We’re social animals with some built in social preferences, along with other built in preferences.
You do not admit any sense in which it would be ‘better’
I could come up with a zillion different “betters” where that was the case, but that doesn’t mean that I find it better overall according to my values.
When you say someone is obligated to do X, you mean that you would prefer that they be coerced to do X.
That's too strong for some cases, but it was my mistake for stating it so categorically in the first place. I can think of a lot of things I consider interpersonal obligations whose violation I wouldn't want met with coercion or violence. I will just assign you a few asshole points and adjust my behavior accordingly, possibly including imposing costs on you out of spite.
That’s the thing. The reality of our preferences is that they weren’t designed to fit into boxes. Preferences are rich in structure, and your attempt to simplify them to one preference ranking to rule them all just won’t adequately model what humans are, no matter how intellectually appealing.
We have lots of preference modalities, which have similarities to and differences from moral preferences. It tends to be a matter of emphasis and weighting. For example, a lot of our status or beauty preferences function in some ways like our moral preferences. Low status entails a greater likelihood of punishment, low status rubs off on you if you fail to disapprove of it, and both effects occur at higher orders as well, such as when you don't disapprove of someone who doesn't disapprove of low status.
In what people call moral concerns, I observe that higher-order punishing and rewarding is more pronounced than for other preferences, such as food tastes. If you prefer mint ice cream, it generally won't be held against you, and most people would consider it weird to do so. If you hold some disapproved-of moral view, it is held against you whether you act on it or not, and it is expected that it will be held against you.