I think that you have missed the point of the thought experiment. We can compare the utilities of scenarios without having to consider the mechanics that could produce each one. Just imagine that Omega comes up to you and says “You can choose which world I implement: A or A+.” Which one would you rather have Omega instantiate?
The key point of the paradox is that preferences seem to be circular, which is very bad. If U(A) < U(A+) < U(B-) < U(B) < U(A), then the utility function is fundamentally broken. It doesn’t matter that there’s usually no way to get from B to A, or anything like that.
On the basis you just described, we actually have:
U(A) < U(A+) : Q8x1000 < Q8x1000 + Q4x1000
U(A+) < U(B-) : Q8x1000 + Q4x1000 < Q7x2000
U(B-) = U(B) : Q7x2000 = Q7x2000
U(B) > U(A) : Q7x2000 > Q8x1000
In the last line you put “<” where the mathematics dictates that there should be a “>”. Why have you gone against the rules of mathematics?
You changed to a different basis to declare that U(B) < U(A), and the basis that you switched to is the one that recognises the relation between happiness, population size and resources.
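To make the arithmetic behind those four comparisons explicit, here is a minimal sketch of the total-happiness basis. The scenario labels, the data layout and the total() helper are mine, introduced purely for illustration; the Q-values and population sizes are the ones quoted above.

```python
# Minimal sketch of the total-happiness basis, using the figures quoted above.
# Scenario names, data layout and the total() helper are illustrative only.

scenarios = {
    "A":  [(8, 1000)],             # 1000 people at Q8
    "A+": [(8, 1000), (4, 1000)],  # the same 1000 at Q8 plus 1000 extra at Q4
    "B-": [(7, 2000)],             # 2000 people at Q7
    "B":  [(7, 2000)],             # same figures as B-
}

def total(groups):
    """Total happiness: sum of happiness level times group size."""
    return sum(level * size for level, size in groups)

for name, groups in scenarios.items():
    print(name, total(groups))
# A 8000, A+ 12000, B- 14000, B 14000
# So U(A) < U(A+) < U(B-) = U(B) and U(B) > U(A): the chain never loops back below A.
```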
That’s my point! My entire point is that this circular ordering of utilities violates mathematical reasoning. The paradox is that A+ seems better than A, B- seems better than A+, B seems equal to B-, and yet B seems worse than A. (Dutch booking problem!) Most people do not consider “a world with the maximal number of people such that they are all still barely subsisting” to be the best possible world. Yet this is what you get when you carry out the Parfit operation repeatedly, and each individual step of the Parfit operation seems to increase preferability.
No, it’s not. It is a brute fact of my utility function that I do not want to live in a world with a trillion people that each have a single quantum of happiness. I would rather live in a world with a billion people that are each rather happy. The feasibility of the world doesn’t matter—the resources involved are irrelevant—it is only the preferability that is being considered, and the preference structure has a Dutch book problem. That and that alone is the Parfit paradox.
“That’s my point! My entire point is that this circular ordering of utilities violates mathematical reasoning.”
It only violates mathematical reasoning because you wrongly put “<” where it should have been “>”. With that corrected, there is no paradox. If you stick to using the same basis for comparing the four scenarios, you never get a paradox (regardless of which basis you choose to use for all four). You only get something that superficially looks like a paradox by changing the basis of comparison for different pairs, and that’s cheating.
“The paradox is that A+ seems better than A, B- seems better than A+, B seems equal to B-, and yet B seems worse than A.”
Only on a different basis, and that is not a paradox. (The word “paradox” is ambiguous, though: things that are merely confusing can be called paradoxes even when they can be resolved, but in philosophy, logic and mathematics, the only paradoxes of significance are the ones that have no resolution, if any such paradoxes actually exist.)
“Most people do not consider “a world with the maximal number of people such that they are all still barely subsisting” to be the best possible world. Yet this is what you get when you carry out the Parfit operation repeatedly, and each individual step of the Parfit operation seems to increase preferability.”
That’s because most people intuitively go on the basis that there’s an optimal population size for a given amount of resources. If you do the four comparisons on that basis, you get the following: U(A) > U(A+) < U(B-) = U(B) < U(A), and again there’s no paradox there. The only semblance of a paradox appears when you break the rules of mathematics by mixing the results of the two lots of analysis. Note too that you’re introducing misleading factors as soon as you talk about “barely subsisting”: that introduces the idea of great suffering, which would lead to a happiness level below zero rather than above it. For the happiness level to be just above zero, the people must be just inside the range of a state of contentment.
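As a concrete illustration of that second basis, here is a sketch that uses average happiness per person as a stand-in for “optimal population size for the available resources”. The choice of averaging is my own simplification, not something stated above; it just happens to reproduce the four comparisons listed.

```python
# Sketch of the second basis, with average happiness per person standing in
# for the "optimal population for the available resources" intuition.
# The averaging rule is my assumption; the resulting ordering matches the text.

scenarios = {
    "A":  [(8, 1000)],
    "A+": [(8, 1000), (4, 1000)],
    "B-": [(7, 2000)],
    "B":  [(7, 2000)],
}

def average(groups):
    """Average happiness per person across all groups."""
    people = sum(size for _, size in groups)
    return sum(level * size for level, size in groups) / people

for name, groups in scenarios.items():
    print(name, average(groups))
# A 8.0, A+ 6.0, B- 7.0, B 7.0
# On this one basis: U(A) > U(A+), U(A+) < U(B-), U(B-) = U(B), U(B) < U(A).
# Still no cycle: the same scenario (A) simply sits at the top at both ends of the chain.
```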
“You changed to a different basis to declare that U(B) < U(A), and the basis that you switched to is the one that recognises the relation between happiness, population size and resources.” --> “No, it’s not.”
If you stick to a single basis, you get this:-
8000 < 12000 < 14000 = 14000 > 8000
No paradox.
But you may indeed be using a different basis from the one I’ve chosen (see below).
“It is a brute fact of my utility function that I do not want to live in a world with a trillion people that each have a single quantum of happiness.”
Don’t let that blind you to the fact that it is not a paradox. There are a number of reasons why you might not like B, or a later example Z where happiness for each person is at Q0.00000...0000001. One of them may be that you’re adding unstated conditions to happiness, such as the idea that if happiness is more spaced out, you’ll feel deprived of happiness during the long gaps between happy moments, or that if there is only one happy moment reserved for you in total, you’ll feel sad after that moment has come because you know there won’t be another one coming. For the stats to be correct, though, these would have to be populations of modified people who have been stripped of many normal human emotions. For real people to have a total happiness level of a single quantum, that would need to be an average in which they actually have a lot of happiness in their lives: enough to keep the negative feeling of being deprived of happiness much of the rest of the time at low levels and to cancel out those negatives overall, which means they’re living good lives with some real happiness.
“I would rather live in a world with a billion people that are each rather happy.”
Well, if that isn’t driven by an intuitive recognition of there being optimal population sizes for a given amount of resources, you’re still switching to a different basis where you will eliminate people who are less happy in order to increase happiness of the survivors. So, why not go the whole hog and extend that to a world with just one person who is extremely happy but where total happiness is less than in any other scenario? Someone can then take your basis for choosing smaller populations with greater happiness for each individual and bring in the same fake paradox by making the illegal switch to a different basis to say that a population with a thousand people marginally less happy than that single ecstatic individual is self-evidently better, even though you’d rather be that single ecstatic person.
All you ever have with this paradox is an illegal mixing of two bases, such as using one which seeks maximum total happiness while the other seeks maximum happiness of a single individual. So, why is it that when you’re at one extreme you want to move away from it? The answer is that you recognise that there is a compromise position that is somehow better, and in seeking that, you’re bringing in undeclared conditions (such as the loneliness of the ecstatic individual which renders him less happy than the stated value, or the disappointing idea of many other people being deprived of happiness which could easily have been made available to them). If you declare all of those conditions, you will have a method for determining the best choice. Your failure to identify all your undeclared conditions does not make this a paradox—it merely demonstrates that your calculations are incomplete. When you attempt to do maths with half your numbers missing, you shouldn’t bet on your answers being reliable.
However, the main intuition that’s actually at work here is the one I identified at the top: that there is an optimal population size for a given amount of available resources. If the population grows too big (and leaves people in grinding poverty), happiness declines towards zero and accelerates on into the negative, while if the population grows too small, the happiness of individuals also declines. Utilitarianism drives us towards the optimal population size and not towards ever-larger populations with ever-decreasing happiness, because more total happiness can always be generated by adjusting the population size over time until it becomes optimal.
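If it helps to see that shape in numbers, here is a toy model. The functional form is entirely my own assumption (the names per_person_happiness and total_happiness and the resource figure are invented for illustration), chosen only so that total happiness peaks at an intermediate population.

```python
import math

# Toy model only: the formula below is an assumed shape, not anything stated above.
RESOURCES = 10_000

def per_person_happiness(pop):
    # More resources per head helps (with diminishing returns); very small
    # populations pay an isolation penalty; the constant 1 is a baseline cost.
    return math.sqrt(RESOURCES / pop) - 1 - 500 / pop

def total_happiness(pop):
    return pop * per_person_happiness(pop)

best = max(range(10, 50_001, 10), key=total_happiness)
print(best, round(total_happiness(best)))    # total happiness peaks at an intermediate size
print(round(total_happiness(50_000)))        # far too many people: per-head and total go negative
print(round(per_person_happiness(10), 1))    # far too few: individuals are also worse off
```

On this toy model, moving the population towards the peak always raises total happiness, which is the sense in which utilitarianism points at an optimum rather than at ever-larger populations.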
That only breaks if you switch to a different scenario. Imagine that for case Z we have added trillions of unintelligent sentient devices which can only handle a maximum happiness of the single quantum of happiness that they are getting. They are content enough, and the total happiness is greater than in an equivalent of case A where only a thousand unintelligent sentient devices exist, but where these devices can handle (and are getting) a happiness level of Q8. Is the universe better with just a thousand devices at Q8 or trillions of them at Q0.000000001? The answer is that it’s better to have trillions of them with less individual but greater total happiness. When you strip away all the unstated conditions, you find that utilitarianism works fine. There is no possible way to make these trillion devices feel happier, so reducing their population relative to the available resources would reduce total happiness rather than move towards an optimum, and that is why it doesn’t feel wrong in the way that it does with humans.
“The feasibility of the world doesn’t matter—the resources involved are irrelevant—it is only the preferability that is being considered, and the preference structure has a Dutch book problem. That and that alone is the Parfit paradox.”
If you want a version with no involvement of resources, then use my version with the unintelligent sentient devices so that you aren’t bringing a host of unstated conditions along for the ride. There is no paradox regardless of how you cut the cake. All we see in the “paradox” is a woeful attempt at mathematics which wouldn’t get past a school maths teacher. You do not have a set of numbers that shows a paradox when you use the same basis throughout (as would be required for it to be a paradox).