Are there any reasons for becoming utilitarian, other than to satisfy one’s empathy?
I am interested in this, or possibly a different closely-related thing.
I accept the logical arguments underlying utilitarianism (“This is the morally right thing to do”) but not the actionable consequences (“Therefore, I should do this thing”). I ‘protect’ only my social circle, and have never seen any reason why I should extend that.
What does “the morally right thing to do” mean if not “the thing you should do”?
To rephrase: I accept that utilitarianism is the correct way to extrapolate our moral intuitions into a coherent generalizable framework. I feel no ‘should’ about it—no need to apply that framework to myself—and feel no cognitive dissonance when I recognize that an action I wish to perform is immoral, if it hurts only people I don’t care about.
Ultimately I think that is the way all utilitarianism works. You define an in-group of people who are important, each effectively as important as the others and possibly as important as yourself.
For most modern utilitarians, the in-group is all humans. Some modern utilitarians put mammals with relatively complex nervous systems in the group, and for the most part become vegetarians. Others put everything with a nervous system in there and for the most part become vegans. Very darn few put all life forms in there, as they would starve. Implicit in this is the assumption that all life forms would place negative utility on being killed to be eaten, which may be reasonable or may be a projection of human values onto non-human entities.
But logically it makes as much sense to shrink the group you are utilitarian about as to expand it. “Only Americans” seems like a popular one in the US when discussing immigration policy. “Only my friends and family” has a following. “Only LA Raiders fans” or “only Manchester United fans” also seems to gather proponents.
Around here, I think you find people trying to put all thinking things, even mechanical ones, in the in-group, or perhaps only all conscious thinking things. Maybe the way to create a friendly AI would be to make sure the AI never values its own life more than it values its own death; then we would always be able to turn it off without it fighting back.
Also, I suspect that in reality you have a sliding scale of acceptance: you would not be morally neutral about killing a stranger on the road and taking their money if you thought you could get away with it. But you certainly won’t accord the stranger the full benefit of your concern, just a partial benefit.
Oh, there are definitely gradations. I probably wouldn’t do this, even if I could get away with it. I don’t care enough about strangers to go out of my way to save them, but neither do I want to kill them. On the other hand, if it was a person I had an active dislike for, I probably would. All of which is basically irrelevant, since it presupposes the incredibly unlikely “if I thought I could get away with it”.
I used to think I thought that way, but then I had some opportunities to casually steal from people I didn’t know (and easily get away with it), and I didn’t take them. With that said, I pirate things all the time despite believing that doing so frequently harms the content owners a little.
I have taken that precise action against someone who mildly annoyed me. I remember it (and the perceived slight that motivated it), but feel no guilt over it.
By utilitarian you mean:
Caring about all people equally
Hedonism, i.e. caring about pleasure/pain
Both of the above (=Bentham’s classical utilitarianism)?
In any case, what answer do you expect? What would constitute a valid reason? What are the assumptions from which you want to derive this?
I mean both of the above (Bentham’s classical utilitarianism).
I do not expect any specific answer.
For me personally, probably nothing, since, apparently, I neither really care about people (I guess I overintellectualized my empathy), nor about pleasure and suffering. The question, however, was asked mostly to better understand other people.
I don’t know any.
You can band together lots of people to work together towards the same utilitarianism.
i.e. change happiness-suffering to something else?
I don’t know how to parse that question.
I am claiming that people with no empathy at all can agree to work towards utilitarianism, for the same reason they can agree to cooperate in the repeated prisoner’s dilemma.
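To make the repeated prisoner’s dilemma point concrete, here is a minimal sketch (my own illustration, not anything from the thread; the payoff numbers and the tit-for-tat strategy are standard textbook choices): two purely self-interested agents who expect to keep playing do better by cooperating than by mutually defecting.

```python
# Minimal iterated prisoner's dilemma. The payoff matrix and strategies
# below are assumed textbook values, used only to illustrate the point.

# Payoffs for one round, keyed by (my_move, their_move): (my_points, their_points)
PAYOFFS = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # I cooperate, they defect
    ("D", "C"): (5, 0),   # I defect, they cooperate
    ("D", "D"): (1, 1),   # mutual defection
}

def play(strategy_a, strategy_b, rounds=100):
    """Play a repeated game and return each player's total score."""
    score_a = score_b = 0
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        points_a, points_b = PAYOFFS[(move_a, move_b)]
        score_a += points_a
        score_b += points_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): both worse off
```

Neither agent cares about the other’s score; cooperation is simply what a selfish score-maximizer settles on when the game repeats.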
I don’t understand why this is an argument in favor of utilitarianism.
A bunch of people can agree to work towards pretty much anything, for example getting rid of the unclean/heretics/untermenschen/etc.
I think you are taking this sentence out of context. I am not trying to present an argument in favor of utilitarianism. I was trying to explain why empathy is not necessary for utilitarianism.
I interpreted the question as “Why (other than my empathy) should I try to maximize other people’s utility?”
Right, and here is your answer:
I don’t understand why this is a reason “to maximize other people’s utility”.
You can entangle your own utility with other’s utility, so that what maximizes your utility also maximizes their utility and vice versa. Your terminal value does not change to maximizing other people’s utility, but it becomes a side effect.
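A toy sketch of that entanglement (my own example, with made-up numbers): if both parties are paid out of the same joint outcome, the action that selfishly maximizes one person’s payoff also maximizes the other’s, even though neither terminally values the other’s welfare.

```python
# Hypothetical "entangled utilities": both agents receive a fixed share of a
# shared project's output, so their utilities are proportional to each other.

def joint_output(my_effort, partner_effort):
    # Made-up production function for a shared project.
    return 10 * (my_effort + partner_effort) - my_effort ** 2

def my_utility(my_effort, partner_effort):
    return 0.5 * joint_output(my_effort, partner_effort)      # my 50% share

def partner_utility(my_effort, partner_effort):
    return 0.5 * joint_output(my_effort, partner_effort)      # their 50% share

# I selfishly pick the effort level that maximizes *my* utility...
best_effort = max(range(11), key=lambda e: my_utility(e, partner_effort=3))

# ...and because the utilities are proportional, my partner's utility is
# maximized by the same choice, purely as a side effect.
print(best_effort)                                                  # 5
print(my_utility(best_effort, 3), partner_utility(best_effort, 3))  # 27.5 27.5
```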
So you are basically saying that sometimes it is in your own self-interest (“own utility”) to cooperate with other people. Sure, that’s a pretty obvious observation. I still don’t see how it leads to utilitarianism.
If your terminal value is still self-interest but it so happens that there is a side effect of increasing other people’s utility—that doesn’t look like utilitarianism to me.
I was only trying to make the obvious observation.
Just trying to satisfy your empathy does not really look like pure utilitarianism either.
There’s no need to parse it anymore, I didn’t get your comment initially.
I agree theoretically, but I doubt that utilitarianism can bring more value to an egoistic agent than being egoistic without regard to other humans’ happiness.
I agree in the short term, but many of my long term goals (e.g. not dying) require lots of cooperation.
I guess the reason is maximizing one’s utility function, in general. Empathy is just one component of the utility function (for those agents who feel it).
If multiple agents share the same utility function, and they know it, it should make their cooperation easier, because they only have to agree on facts and models of the world; they don’t have to “fight” against each other.
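As a toy illustration of that last point (my own example, with made-up options and numbers): agents who share one utility function and know it can each compute the best joint option independently and arrive at the same answer, so any remaining disagreement can only be about facts, not values.

```python
# Hypothetical shared-utility coordination: identical preferences over joint
# options mean both agents independently pick the same one, with no bargaining.

joint_options = ["build a park", "build a road", "do nothing"]

def shared_utility(option):
    # Both agents evaluate outcomes with this same (assumed) function.
    return {"build a park": 8, "build a road": 5, "do nothing": 0}[option]

choice_of_agent_a = max(joint_options, key=shared_utility)
choice_of_agent_b = max(joint_options, key=shared_utility)

assert choice_of_agent_a == choice_of_agent_b
print(choice_of_agent_a)  # "build a park"
```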
Apparently, we mean different things by “utilitarianism”. I meant a moral system whose terminal goal is to maximize pleasure and minimize suffering in the whole world, while you’re talking about an agent’s utility function, which may have no regard for pleasure and suffering.
I agree, though, that it makes sense to try to maximize one’s utility function, but to me it’s just egoism.
I suspect that most people already are utilitarians—albeit with implicit calculation of their utility function. In other words, they already figure out what they think is best and do that (if they thought something else was better, it’s what they’d do instead).
Utilitarian =/= utility maximizer.