A neat idea from Welfare Axiology
Arrhenius’s Impossibility Theorem
You’ve no doubt heard of the Repugnant Conclusion before. Well, let me introduce you to its older cousin who rides a motorbike and has a steroid addiction. Here are 6 common-sense conditions that can’t be achieved simultaneously (tweaked for readability). I first encountered this theorem in Yampolskiy’s “Uncontrollability of AI”.
Arrhenius’s Impossibility Theorem
Given some rule for assigning a total welfare value to any population, you can’t find a way to satisfy all of the first 3 principles whilst avoiding the final 3 conclusions.
The Dominance Principle:
If populations A and B are the same nonzero size and every member of population A has better welfare than every member of population B, then A should be superior to B.
(Thanks to Donald Hobson for this correction)
The Addition Principle:
Adding more happy people to our population increases its total value.
The Minimal Non-Extreme Priority Principle:
There exists some number such that adding that many extremely happy people plus a single slightly sad person is better than adding the same number of slightly happy people. I think of this intuitively as: making enough people very happy outweighs making a single person slightly sad.
The Repugnant Conclusion:
Any population with very high levels of happiness is worse than some larger population of people with very low (but still positive) happiness.
The Sadistic Conclusion:
Adding people with negative welfare to a population can be better than adding people with positive welfare.
The Anti-Egalitarian Conclusion:
For any perfectly equal population, there is an unequal society of the same size with lower average welfare that is considered better.
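To make the structure a bit more concrete, here is a minimal sketch of my own (not from Arrhenius’s paper): treat a population as a list of individual welfare numbers and a rule as a function from populations to a single value, with total utilitarianism as the example rule. All the welfare figures below are made up purely for illustration.

```python
def total_welfare(population):
    """Total utilitarianism: the value of a population is the sum of individual welfare."""
    return sum(population)

# Dominance (corrected form): same-size populations, every member of A better off
# than every member of B.
utopia   = [100.0] * 10   # 10 people with very high welfare
mediocre = [1.0] * 10     # 10 people barely above neutral
assert total_welfare(utopia) > total_welfare(mediocre)

# Addition Principle: adding a happy (positive-welfare) person raises the value.
assert total_welfare(utopia + [0.5]) > total_welfare(utopia)

# Minimal Non-Extreme Priority: adding n extremely happy people plus one slightly
# sad person beats adding n slightly happy people, for a big enough n.
n = 100
assert total_welfare(utopia + [100.0] * n + [-1.0]) > total_welfare(utopia + [1.0] * n)

# Repugnant Conclusion: a large enough population of barely-positive lives
# outranks the small utopia under the same rule.
huge_mediocre = [0.01] * 200_000
assert total_welfare(huge_mediocre) > total_welfare(utopia)

print("On these examples, total utilitarianism satisfies instances of principles 1-3",
      "and also entails the Repugnant Conclusion.")
```

That is exactly the kind of trade-off the theorem says you can’t dodge: any rule that keeps the first three principles ends up accepting at least one of the final three conclusions.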
You have made a mistake.
Principle 1 should read
>If populations A and B are the same nonzero size and every member of population A has better welfare than every member of population B, then A should be superior to B.
Otherwise it is excessively strong, and for example claims that 1 extremely happy person is better than a gazillion quite happy people.
(And pedantically, there are all sorts of weirdness happening at population 0)
Thank you for pointing this out!
Principles 2 and 3 don’t seem to have any strong justification, with 3 being very weak.
If the 3 principles were all adopted for some reason, then conclusion 6 doesn’t seem very bad.
Interesting, 2 seems the most intuitively obvious to me. Holding everyone else’s happiness equal and adding more happy people seems like it should be viewed as a net positive.
To better see why 3 is a positive, think about it as taking away a lot of happy people to justify taking away a single, only slightly sad individual.
6 is undesirable because you are putting a positive value on inequality for no extra benefit.
But I agree, 6 is probably the one to go.
It doesn’t say “equally happy people”. It just says “happy people”. So a population of a billion might be living in a utopia, and then you add a trillion people who are just barely rating their life positively instead of negatively (without adversely affecting the billion in utopia), and principle 2 says that you must rate this society as better than the one in which everyone is living in utopia.
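For concreteness, here is that comparison under a simple total-welfare rule, with made-up per-person figures (100 for a utopian life, 0.01 for a barely-positive one):

```python
# Made-up figures for the scenario above: a billion utopian lives at welfare 100
# each, then a trillion extra lives at welfare 0.01 each.
utopia_total   = 1_000_000_000 * 100.0
expanded_total = utopia_total + 1_000_000_000_000 * 0.01

print(expanded_total > utopia_total)       # True: a total rule must call this an improvement
print(expanded_total / (10**9 + 10**12))   # ~0.11: average welfare has collapsed
```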
I don’t see a strong justification for this. I can see arguments for it, but they’re not at all compelling to me.
I completely disagree that “taking people away” is at all equivalent. Path-dependence matters.
If you check the paper, the form of welfare ranking discussed by Arrhenius appears to be path-independent.
Sure—there are other premises in there that I disagree with as well.
To me it seems rather obvious that we should jettison number 3. There is no excuse for creating more suffering under any circumstances. The ones who walked away from Omelas were right to do so. I suppose this makes me a negative utilitarian, but I think, along with David Pearce, that the total elimination of suffering is entirely possible, and desirable. (Actually, reading Noosphere89's comment, I think it makes me a deontologist. But then, I’ve been meaning to make a “Why I no longer identify as a consequentialist” post for a while now...)
Number 6 is the condition likeliest to be accepted by a lot of people in practice, and acceptance of Condition 6 is basically one of the pillars of capitalism. Only the very far left, people like communists or socialists, would view this condition negatively.
Number 5 is a condition that is possibly accepted by conservation/environmentalist/nature movements, and acceptance of condition 5 is likely due to different focuses. It’s an unintentional tradeoff, but it’s one of the best examples of a tradeoff in ethical goals.
Condition 4 is essentially accepting a pro-natalist position.
Premise 3 is also not accepted by deontologists.
I don’t think that you need to be very far left to prefer a society with higher rather than lower average wellbeing.
Pretty much anyone would prefer “a society with higher rather than lower average wellbeing”, if that’s all they’re told about these hypothetical societies, they don’t think about any of the implications, and their attention is not drawn to the things (as in the impossibility theorem) that they will have to trade off against each other.
Condition 6 is stronger than that, in that everyone must essentially have equivalent welfare, and only communists/socialists would view it as an ideal to aspire to. It’s not just higher welfare, but the requirement that welfare be equal; equivalently, that there are no utility monsters in the population.
I think that if the alternative was A) lots of people having low welfare and a very small group of people having very high welfare, or B) everyone having pretty good welfare… then quite a few people would prefer B.
The chart that Arrhenius uses to first demonstrate Condition 6 is this:
In that chart, A has only a single person β who has very high welfare, and a significant group of people γ with low (though still positive) welfare. The people α have the same (pretty high) welfare as everyone in world B. Accepting condition 6 involves choosing A over B, even though B would offer greater or the same welfare to everyone except person β.
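To pin the comparison down with made-up numbers (the chart is qualitative, so none of these figures come from Arrhenius): world B is perfectly equal and has the higher average, so any rule that still ranks A above it is accepting this instance of Condition 6. A minimal sketch:

```python
# Purely illustrative numbers for the two worlds described above.
world_A = [80.0] * 50 + [1000.0] + [5.0] * 49   # alpha group, person beta, gamma group
world_B = [80.0] * 100                          # everyone at the alpha level

def average(pop):
    return sum(pop) / len(pop)

print(average(world_A), average(world_B))   # 52.45 vs 80.0: the equal world B has the higher average

# One crude example of a rule that accepts this instance of the Anti-Egalitarian
# Conclusion: value a world only by its best-off person.
def peak_welfare(pop):
    return max(pop)

print(peak_welfare(world_A) > peak_welfare(world_B))   # True: the unequal, lower-average world A wins
```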
This sounds like the most contested condition IRL. As I stated, capitalists, libertarians, and people biased towards freedom-oriented views (broadly the centre right and right wing) would prefer the first scenario, with the centre left leaning towards the second and farther-left groups supporting it outright.
In essence, this captures the core of a lot of political and moral debates: should utility monsters be allowed, or conversely, should we try to make things as equal as possible?
This is intended to be descriptive, not prescriptive.