Is there a difference between utilitarianism and selfish egoism?
For utilitarianism, you need to choose a utility function. This is entirely based on your preferences: what you value and who you value are weighted and summed to create your utility function. I don’t see how this differs from selfish egoism: you decide what and who you value, and take actions that maximize these values.
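As a toy illustration of the “weighted and summed” framing (the names, weights, and wellbeing numbers below are invented for the sketch, not anything canonical), both doctrines fit the same template and differ only in how the weights are chosen:

```python
# Toy sketch: a utility function as a weighted sum over the people an agent cares about.
# All names, weights, and wellbeing numbers are made up for illustration.

def utility(outcome: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of how well the outcome treats each person the agent weights."""
    return sum(weights.get(person, 0.0) * wellbeing
               for person, wellbeing in outcome.items())

outcome = {"me": 2.0, "friend": 3.0, "stranger": 1.0}

egoist_weights = {"me": 1.0}                                       # all weight on yourself
utilitarian_weights = {"me": 1.0, "friend": 1.0, "stranger": 1.0}  # "equal" weights

print(utility(outcome, egoist_weights))       # 2.0
print(utility(outcome, utilitarian_weights))  # 6.0
```

On this reading, the disagreement is only about which weight vectors count as legitimate, not about the maximization itself.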
Each doctrine comes with a little brainwashing. Utilitarianism is usually introduced as summing “equally” across people, but we all know some arrangements of atoms are more equal than others. However, introducing it this way naturally leads people to look for cooperation and to value others more, both of which increase their chances of survival.
Ayn Rand reacted strongly against religion and the self-sacrifice it prescribes, so selfish egoism is often introduced as a reaction:
When you die, everything is over for you. Therefore, your survival is paramount.
You get nothing out of sacrificing your values. Therefore, you should only do things that benefit you.
Kant claimed people are good only by their strength of will: wanting to help someone is a selfish action, and therefore not good. Rand takes the more individually rational approach: wanting to help someone makes you good, while helping someone against your interests is self-destructive. To be fair to Kant, when most agents are highly irrational, your society will do better with universal laws than with moral anarchy. This is also probably why selfish egoism gets a bad rap: even if you are a selfish egoist, you want to influence your society to be more Kantian. Or, at the very least, more like the utilitarians, who at least claim to value others.
However, I think rational utilitarians really are the same as rational selfish egoists. A rational selfish egoist would choose to look for cooperation. When they have fundamental disagreements with cooperative others, they would modify their values to care more about their counterpart so that both sides win. Under the utilitarian framing it’s harder to notice when to change your utility function; with selfish egoism it’s a little easier. After all, the most important thing is survival, not utility.
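To make the “modify your values so you both win” move concrete, here is a sketch using an assumed Prisoner’s Dilemma payoff matrix (the standard T=5, R=3, P=1, S=0 numbers are illustrative, not from the post). It finds how much weight a selfish egoist must put on their counterpart’s payoff before cooperating becomes their own best move:

```python
# Assumed Prisoner's Dilemma payoffs (T=5, R=3, P=1, S=0), chosen for illustration.
# (my move, their move) -> (my payoff, their payoff)
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def my_value(my_move: str, their_move: str, w: float) -> float:
    """My re-weighted utility: my own payoff plus w times my counterpart's payoff."""
    mine, theirs = PAYOFF[(my_move, their_move)]
    return mine + w * theirs

def cooperation_dominates(w: float) -> bool:
    """Does cooperating beat defecting regardless of what the other player does?"""
    return all(my_value("C", their, w) > my_value("D", their, w)
               for their in ("C", "D"))

for w in (0.0, 0.5, 0.75, 1.0):
    print(f"w={w}: cooperation dominates? {cooperation_dominates(w)}")
# With these payoffs, cooperation becomes dominant once w > 2/3:
# 3 + 3w > 5 (against a cooperator) and 5w > 1 + w (against a defector).
```

Two egoists who each raise w past that threshold land on mutual cooperation while still maximizing their own (re-weighted) utility, which is the sense in which the two rational positions converge.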
I think both philosophies are slightly wrong. You shouldn’t care about survival per se, but about expected discounted future entropy (i.e. how well you proliferate). This obviously drops to zero if you die, but a fulfilling fifty years of experiences is probably worth more than seventy years in a 2x2 box. Utility is merely a weight on your chances of survival, and thus on future entropy. Soft actor-critic comes close here, though it’s usually described as entropy-regularized reinforcement learning. In reality, all reinforcement learning is maximizing energy-regularized entropy.
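For reference, a minimal statement of the maximum-entropy objective that soft actor-critic optimizes (standard notation from the RL literature, not anything defined in this post), with the temperature $\alpha$ setting the exchange rate between expected reward and policy entropy $\mathcal{H}$:

$$J(\pi) \;=\; \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\!\left[\, r(s_t, a_t) \;+\; \alpha\, \mathcal{H}\!\left(\pi(\cdot \mid s_t)\right) \right]$$

With $\alpha = 0$ this collapses to the usual expected-return objective; the “energy-regularized entropy” framing above reads the same trade-off from the other side, treating reward as the regularizer and entropy as the quantity being maximized.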
For utilitarianism, you need to choose a utility function. This is entirely based on your preferences: what you value and who you value are weighted and summed to create your utility function. I don’t see how this differs from selfish egoism: you decide what and who you value, and take actions that maximize these values.
I see a difference in the word “summed”. In practice this would probably mean things like cooperating in the Prisoner’s Dilemma (maximizing the sum of utility, rather than the utility of an individual player).
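Using the same illustrative payoffs as in the sketch above (T=5, R=3, P=1, S=0; again assumed, not from the thread), the divergence this comment points to is just:

$$\underbrace{3+3}_{(C,\,C)} \;>\; \underbrace{5+0}_{(D,\,C)} \;=\; \underbrace{0+5}_{(C,\,D)} \;>\; \underbrace{1+1}_{(D,\,D)}$$

The joint sum is maximized by mutual cooperation, whereas for each individual player defection strictly dominates (5 > 3 against a cooperator, 1 > 0 against a defector), so a sum-maximizer and an unweighted egoist prescribe different moves in the same game.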
How do you choose to sum the utility when playing a Prisoner’s Dilemma against a rock?