There’s no need to parse it anymore; I didn’t get your comment initially.
For the same reason they can agree to cooperate in the repeated prisoner’s dilemma.
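A minimal sketch of why repetition changes the calculus (the payoff values and strategy names here are my own illustrative assumptions, using the standard T > R > P > S ordering): two tit-for-tat players end up far better off over many rounds than two unconditional defectors.

```python
# Hypothetical payoff table: (my move, their move) -> my payoff.
# "C" = cooperate, "D" = defect; values follow the standard PD ordering.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strat_a, strat_b, rounds=100):
    """Run the repeated game; each strategy sees the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

# Cooperate first, then mirror the opponent's previous move.
tit_for_tat = lambda opp_last: "C" if opp_last in (None, "C") else "D"
always_defect = lambda opp_last: "D"

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
```

So even a purely self-interested agent can prefer sustained cooperation once the game repeats, which is the point being made above.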
I agree in theory, but I doubt that utilitarianism can bring more value to an egoistic agent than plain egoism with no regard for other humans’ happiness.
I agree in the short term, but many of my long-term goals (e.g. not dying) require lots of cooperation.