I am claiming that people with no empathy at all can agree to work towards utilitarianism, for the same reason they can agree to cooperate in the repeated prisoner’s dilemma.
I don’t understand why this is an argument in favor of utilitarianism.
A bunch of people can agree to work towards pretty much anything, for example getting rid of the unclean/heretics/untermenschen/etc.
I think you are taking this sentence out of context. I am not trying to present an argument in favor of utilitarianism. I was trying to explain why empathy is not necessary for utilitarianism.
I interpreted the question as “Why (other than my empathy) should I try to maximize other people’s utility?”
Right, and here is your answer:
“for the same reason they can agree to cooperate in the repeated prisoner’s dilemma.”
I don’t understand why this is a reason “to maximize other people’s utility”.
You can entangle your own utility with other’s utility, so that what maximizes your utility also maximizes their utility and vice versa. Your terminal value does not change to maximizing other people’s utility, but it becomes a side effect.
So you are basically saying that sometimes it is in your own self-interest (“own utility”) to cooperate with other people. Sure, that’s a pretty obvious observation. I still don’t see how it leads to utilitarianism.
If your terminal value is still self-interest, and it just so happens that increasing other people’s utility is a side effect, that doesn’t look like utilitarianism to me.
I was only trying to make the obvious observation.
Just trying to satisfy your empathy does not really look like pure utilitarianism either.
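As a concrete illustration of the repeated-prisoner’s-dilemma claim being argued over here, below is a minimal Python sketch (using the standard textbook payoffs, which are my assumption, not something stated in this thread) in which two agents who care only about their own score play tit-for-tat against each other. Neither agent’s payoff contains any term for the other’s welfare, yet stable cooperation, and with it the other player’s high score, emerges as a side effect of pure self-interest.

```python
# Iterated prisoner's dilemma with the standard (assumed) payoffs:
# both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5, the betrayed cooperator -> 0.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    """Purely self-interested rule: cooperate first, then copy the opponent."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
```

Each strategy here only ever maximizes its own cumulative score; cooperation is not a goal, it is simply the score-maximizing policy against a reciprocating opponent.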
I agree theoretically, but I doubt that utilitarianism can bring more value to an egoistic agent than being egoistic without regard to other humans’ happiness.
I agree in the short term, but many of my long-term goals (e.g. not dying) require lots of cooperation.
You can band lots of people together to work towards the same utilitarianism.
i.e. change happiness-suffering to something else?
I don’t know how to parse that question.
There’s no need to parse it anymore, I didn’t get your comment initially.
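A similarly minimal sketch of the “entangle your own utility with other’s utility” idea from the exchange above, using an invented joint-venture payoff (the ownership shares, effort grid, and output function are made-up numbers, not anything from the thread): each partner optimizes only their own utility, yet the joint action that is best for one is also the best available to the other, so the other’s utility is maximized as a side effect.

```python
from itertools import product

# Invented joint-venture example: ownership shares, effort grid, and the
# output function below are illustrative numbers only.
EFFORT_LEVELS = range(4)  # each partner picks an effort level 0..3

def output(effort_a, effort_b):
    # Complementary efforts: there is no output unless both contribute.
    return 10 * (effort_a * effort_b) ** 0.5

def utility_a(effort_a, effort_b):
    # Partner A gets 60% of the output and pays only A's own effort cost.
    return 0.6 * output(effort_a, effort_b) - effort_a

def utility_b(effort_a, effort_b):
    # Partner B gets 40% of the output and pays only B's own effort cost.
    return 0.4 * output(effort_a, effort_b) - effort_b

joint_actions = list(product(EFFORT_LEVELS, repeat=2))
best_for_a = max(joint_actions, key=lambda e: utility_a(*e))
best_for_b = max(joint_actions, key=lambda e: utility_b(*e))

# The purely self-interested optimum coincides for both partners.
print(best_for_a, best_for_b)  # (3, 3) (3, 3)
```

Under this toy payoff, cooperation on the joint project is the purely self-interested choice for each partner, which is one way long-term goals that need lots of cooperation can align self-interest with other people’s utility.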