Upvoted for being the kind of post I want on LessWrong, but I agree with the posters above who say that you misunderstand the point of the paradox. Thrasymachus articulates why most clearly. You do, however, make a compelling argument that even if we accept that A<Z we should still spend some resources on increasing happiness. The hypothetical Z presumes more resources than we have. Given that we can’t reach Z even by using all our resources, knowing A<Z doesn’t tell us anything, because Z isn’t one of our options. If we spent all our resources on population growth we’d only achieve Z-, a smaller population than Z with the same happiness, and this might well be worse than A.
EDIT: Not that I accept A<Z. I resolve the non-transitivity by taking A+<A.
That’s really interesting. Why?
And would you also take A+ < A if we fiddled the numbers to get:
A: P1 at 10
A+: P1 at 20, P2-20 at 8
B: P1-20 at 9
So we can still get to the RP, yet A+ seems a really good deal versus A.
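For concreteness, here’s a quick tally of those numbers (a minimal sketch; the welfare figures are just the ones above, and “total” and “average” are simply the two aggregation rules under discussion):

```python
# Tally total and average welfare for the three populations above.
A      = [10]                     # P1 at 10
A_plus = [20] + [8] * 19          # P1 at 20, P2-20 (19 people) at 8
B      = [9] * 20                 # P1-20 (20 people) at 9

for name, pop in [("A", A), ("A+", A_plus), ("B", B)]:
    print(f"{name}: n={len(pop)}, total={sum(pop)}, average={sum(pop) / len(pop):.1f}")

# Results:
#   A:  n=1,  total=10,  average=10.0
#   A+: n=20, total=172, average=8.6
#   B:  n=20, total=180, average=9.0
```

So totals rank A < A+ < B while averages rank A+ < B < A, which is exactly the tension the A → A+ → B step trades on.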
What I actually value is average happiness. All else being equal, I don’t think adding people whose lives are just worth living is a good thing. (Often all else is not equal; I do support adding more people if it will create more interesting diversity, for example.)
I don’t quite understand your example: what does “P2-20” mean? I’d also need to know the populations. Anyway, I think your point is that we can increase the happiness of P1 as we go from A to A+. In that case we might well have A<A+, but then we would have B<A+ also.
Sorry, P2-20 means 19 persons all at 8 units of welfare. The idea was to intuition-pump the person-affecting restriction: A+ is now strictly better for everyone, including the person who was in A, and so it might be more intuitively costly to say that, in fact, A>A+.
You may well have thought about all the ‘standard’ objections to average util in population ethics cases, but just in case not:
Average util seems to me implausible, particularly in different-number cases: for example, it is hard to see why bringing into existence lives which are positive (even really positive) would be wrong just because they would be below the average of the lives that already exist.
Related to averaging is dealing with separability: if we’re just averaging all-person happiness, then whether it is a good thing to bring a person into existence on Earth will depend on the wellbeing of aliens in the next super-cluster (if they’re happier than us, then anti-natalism seems to follow). Biting the bullet here seems really costly, and I’m not sure what other answers one could give. If you have some in mind, please let me know!
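To make the separability worry concrete, here is a minimal sketch with made-up numbers (the population sizes and welfare levels are purely illustrative assumptions, not anything from the post):

```python
# Illustrative only: the combined average depends on beings we can never affect.
earth  = [10] * 1_000        # 1,000 humans at welfare 10
aliens = [50] * 1_000_000    # a distant, much happier population at welfare 50

def average(pop):
    return sum(pop) / len(pop)

newcomer = 40                # a new Earth life, far better than any existing one here

print(average(earth + aliens))               # ~49.96
print(average(earth + aliens + [newcomer]))  # slightly lower than ~49.96

# Because 40 is below the combined average, pure averaging says creating this
# excellent life makes things worse; the verdict flips if the aliens happen
# to be less happy than us instead.
```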
[Second anti-average util example]:
It also means that if the average value of a population is below zero, adding more lives that are below zero (but not as far below zero as the average of the population) is a good thing to do.
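A quick numerical illustration of that claim (the figures are arbitrary assumptions, chosen only to show the direction of the effect):

```python
# Illustrative only: adding below-zero lives that sit above a below-zero average.
existing  = [-10] * 100      # 100 lives at welfare -10 (average -10.0)
additions = [-5] * 100       # 100 more lives at welfare -5: not worth living,
                             # but above the existing average

def average(pop):
    return sum(pop) / len(pop)

print(average(existing))               # -10.0
print(average(existing + additions))   # -7.5

# The average rises from -10.0 to -7.5, so a pure averaging view counts the
# addition of these miserable lives as an improvement.
```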
I.e., Death To All The Whiners! Be happy or die!
Each death adds its own negative utility. Death is worse than the difference in utilities between the situations before and after the death.
It sounds like your actual preferences may be similar to mine. (I just wouldn’t dream of calling it “average happiness”.)
Cool. I don’t really believe in average happiness either (but I’m a lot closer to it than to valuing total happiness). I wouldn’t steal from the poor to give to the rich, even if the rich are more effective at using resources.
I think that saying “I value improving the lives of those who already exist” is a good way to articulate your desire to increase average utility, while also spelling out the fact that you find it bad to increase it by other means, like killing unhappy people.
It also articulates the fact that you would (I assume) be opposed to creating a person who is tortured 23 hours a day in a world filled completely with people being tortured 24 hours a day, even though that would increase average utility.
I also assume that while you believe in something like average utility, you don’t think that a universe with only one person with a utility of 100 is just as morally good as a universe with a trillion people who each have a utility of 100. So you probably also value having more people to some extent, even if you value it incrementally much less than average utility (I refer to this value as “number of worthwhile lives”).
It sounds like you must also value equality for its own sake, rather than as a side-effect of diminishing marginal utility. I think I am also coming around to this way of thinking. I don’t think equality is infinitely valuable, of course, it needs to be traded off against other values. But I do think that, for example, a world where people are enslaved to a utility monster is probably worse than one where they are free, even if that diminishes total aggregate utility.
In fact, I’m starting to wonder if total utility is a terminal value, or if increasing it is just a side effect of wanting to simultaneously increase average utility and the number of worthwhile lives.
Agreed on all counts.
(Apart from: I wouldn’t say that I was maximising others’ utility. I’d say I was maximising their happiness, freedom, fulfilment, etc. A utility function is an abstract mathematical thing. We can prove that rational agents behave as if they were trying to maximise some utility function. Since I’m trying to be a rational agent I try to make sure my ideas are consistent with a utility function, and so I sometimes talk of “my utility function”.
But when I consider other people I don’t value their utility functions. I just directly value their happiness, freedom, fulfilment, and so on. I don’t value their utility functions because: one, they’re not rational and so they don’t have utility functions; two, valuing each other’s utility would lead to difficult self-reference; but mostly three, on introspection I really do just value their happiness, freedom, fulfilment, etc., and not their utility.
The sense in which they do have utility is that each contributes utility to me. But then there’s no such thing as “an individual’s utility” because (as we’ve seen) the utility other people give to me is a combined function of all of their happiness, freedom, fulfilment, and so on.)
I think I understand. I tend to use the word “utility” to mean something like “the sum total of everything a person values.” Your use is probably more precise, and closer to the original meaning.
I also get very nervous of the idea of maximizing utility because I believe wholeheartedly that value is complex. So if we define utility too narrowly and then try to maximize it we might lose something important. So right now I try to “increase” or “improve” utility rather than maximize it.