I have probably heard those arguments, but the particular formulation you mention appears to be embedded in a book of ethical philosophy, so I can’t check: I haven’t got a lot of time or money for reading whole ethical philosophy books. I think that’s a mostly doomed approach that nobody should spend too much time on.
I looked at the Wikipedia summary, for whatever that’s worth, and here are my standard responses to what’s in there:
I reject the idea that I only get to assign value to people and their quality of life, and don’t get to care about other aspects of the universe in which they’re embedded, or about their effects on it. I am, if you push the scenario hard enough, literally willing to value maintaining a certain amount of VOID, sort of a “void preserve”, if you will, over adding more people. And it gets even hairier if you start asking difficult questions about what counts as a “person” and why. And if you broaden your circle of concern enough, it starts to get hard to explain why you give equal weight to everything inside it.
Even if you do restrict yourself only to people, which again I don’t, step 1, from A to A+, doesn’t exactly assume that you can always add a new group of people without in any way affecting the old ones, but it does seem to encourage thinking that way, which is not necessarily a win.
Step 2, where “total and average happiness increase” from A+ to B-, is the clearest example of how the whole argument requires aggregating happiness… and it’s not a valid step. You can’t legitimately talk about, let alone compute, “total happiness”, “average happiness”, “maximum happiness”, or indeed ANYTHING that requires you to put two or more people’s happiness on the same scale. You may not even be able to do it for one person. At MOST you can impose a very weak partial ordering on states of the universe (I think that’s the sort of thing Pareto talked about, but again I don’t study this stuff...). And such a partial ordering doesn’t help at all when you’re trying to look at populations.
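To make that concrete, here’s a toy sketch of the kind of partial ordering I mean (my own construction, not anything from the book): compare states as lists of per-person welfare, using only Pareto dominance. Most pairs simply come out incomparable, and states with different numbers of people can’t be compared at all, which is exactly the problem.

```python
# Toy sketch: compare "states of the universe" given as tuples of
# per-person welfare, using only Pareto dominance. No shared scale
# across people is assumed; the numbers are invented.

def pareto_compare(x, y):
    """Compare two welfare profiles by Pareto dominance."""
    if len(x) != len(y):
        return "incomparable"  # different populations: the ordering is silent
    at_least = all(a >= b for a, b in zip(x, y))
    at_most = all(a <= b for a, b in zip(x, y))
    if at_least and at_most:
        return "equal"
    if at_least:
        return "x dominates"   # everyone at least as well off under x
    if at_most:
        return "y dominates"
    return "incomparable"      # no verdict without a common scale

print(pareto_compare((3, 3), (2, 3)))     # x dominates
print(pareto_compare((5, 1), (1, 5)))     # incomparable
print(pareto_compare((5, 5), (4, 4, 4)))  # incomparable: different sizes
```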
If you could aggregate or compare happiness, the way you did it wouldn’t necessarily be independent of things like how diverse various people’s happiness was; happiness doesn’t have to be a fungible commodity. As I said before, I’d probably rather create two significantly different happy people than a million identical “equally happy” people.
So I don’t accept that the argument requires me to accept the repugnant conclusion on pain of having intransitive preferences.
That said, of course I do have some non-transitive preferences, or at least I’m pretty sure I do. I’m human, not some kind of VNM-thing. My preferences are going to depend on when you happen to ask me a question, how you ask it, and what particular consequences seem most salient. Sure, I often prefer to be consistent, and if I explicitly decided on X yesterday I’m not likely to choose Y tomorrow, especially not if I feel like maybe I’ve led somebody to depend on my previous choice. But consistency isn’t always going to control absolutely.
Even if it were possible, getting rid of all non-transitive preferences, or even all revealed non-transitive preferences, would demand deeply rewriting my mind and personality, and I do not at this time wish to do that, or at least not in that way. It’s especially unappealing because every set of presumably transitive preferences that people suggest I adopt seems to leave me preferring one or another kind of intuitively crazy outcome, and I believe that’s probably going to be true of any consistent system.
My intuitions conflict, because they were acquired ad hoc through biological evolution, cultural evolution, and personal experience. At no point in any of that were they ever designed not to conflict. So maybe I just need to find a way to improve the “average happiness” of my various intuitions. Although if I had to pursue that obviously bogus math analogy any further, I’d say something like the geometric mean would be closer.
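If I had to cash out that bogus analogy with made-up numbers, the point is just that an arithmetic mean will happily trade one intuition all the way down to zero, while a geometric mean won’t:

```python
# Toy illustration (invented numbers): score how well a policy satisfies
# several conflicting intuitions on a 0-1 scale. The arithmetic mean
# tolerates completely crushing one intuition; the geometric mean
# collapses to zero if any single intuition is driven to zero.

import math

def arithmetic_mean(scores):
    return sum(scores) / len(scores)

def geometric_mean(scores):
    return math.prod(scores) ** (1 / len(scores))

balanced = [0.7, 0.7, 0.7]  # every intuition somewhat satisfied
lopsided = [1.0, 1.0, 0.0]  # one intuition completely crushed

print(arithmetic_mean(balanced), arithmetic_mean(lopsided))  # 0.70 vs ~0.67
print(geometric_mean(balanced), geometric_mean(lopsided))    # 0.70 vs 0.0
```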
I suspect you can also find some intransitive preferences of your own if you go looking, and would find more if you had a perfect view of all your preferences and their consequences. And I personally think you’re best off rolling with that. Maybe intransitive preferences open you to being Dutch-booked, but trying to have absolutely transitive preferences is likely to make it even easier to get you to go do something intuitively catastrophic, while telling yourself you have to want it.
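For what it’s worth, the Dutch-book worry is real enough; it’s the standard money-pump construction, sketched here with invented numbers:

```python
# The standard money-pump toy: an agent whose preferences run in a cycle
# A > B > C > A will pay a small fee for each "upgrade" to something it
# prefers, and so can be walked around the cycle losing money forever.

UPGRADE = {"A": "C", "B": "A", "C": "B"}  # item the agent strictly prefers to its holding

def money_pump(holding="A", fee=1.0, rounds=6):
    paid = 0.0
    for _ in range(rounds):
        holding = UPGRADE[holding]  # trade up to the preferred item...
        paid += fee                 # ...paying a small fee each time
    return holding, paid

print(money_pump())  # ('A', 6.0): back where it started, 6 units poorer
```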
You raise lots of good objections there. I think most of them are addressed quite well in the book, though. You don’t need any money, because it seems to be online for free: https://www.stafforini.com/docs/Parfit%20-%20Reasons%20and%20persons.pdf And if you’re short of time, it’s probably only the last chapter you need to read. I really disagree with the suggestion that there’s nothing to learn from ethical philosophy books.
For point 1: Yes you can value other things, but even if people’s quality of life is only a part of what you value, the mere-addition paradox raises problems for that part of what you value.
For point 2: That’s not really an objection to the argument.
For point 3: I don’t think the argument depends on the ability to precisely aggregate happiness. The graphs are helpful ways of conveying the idea with pictures, but the ability to quantify a population’s happiness and plot it on a graph is not essential (and obviously impossible in practice, whatever your stance on ethics). For the thought experiment, it’s enough to imagine a large population at roughly the same quality of life, then adding new people at a lower quality of life, then increasing their quality of life by a lot while only slightly lowering the quality of life of the original people, then repeating, etc. The reference to what happens to the ‘total’ and ‘average’ along the way is, I think, aimed particularly at those who claim to value ‘total’ or ‘average’ happiness. For the key idea, you can keep things vaguer, and the argument still carries force.
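To be concrete, here’s one pass of those steps with purely illustrative numbers (nothing hinges on the exact values; they’re just to show which way the ‘total’ and ‘average’ move):

```python
# One pass of the mere-addition steps with made-up welfare numbers,
# tracking the 'total' and 'average' that total- and average-
# utilitarians say they care about.

def describe(name, welfares):
    total = sum(welfares)
    avg = total / len(welfares)
    print(f"{name}: n={len(welfares)}, total={total}, average={avg:.2f}")

A = [10] * 100            # original population, high quality of life
A_plus = A + [5] * 100    # mere addition: extra people, lives still worth living
B_minus = [8] * 200       # equalize: originals slightly worse off,
                          # newcomers much better off

describe("A ", A)         # n=100, total=1000, average=10.00
describe("A+", A_plus)    # n=200, total=1500, average=7.50
describe("B-", B_minus)   # n=200, total=1600, average=8.00 (both go up)
```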
For point 4: You can try to value things about the distribution of happiness, as a way out. I remember that’s discussed in the book as well, as are a number of other approaches you could try to take to population ethics, though I don’t remember the details. Ultimately, though, I’m not sure which step in the chain of argument that move would help you reject.
On non-transitive preferences being OK: that’s a fair take, and something like it is ultimately what Parfit himself tried to do, I think. He didn’t like the repugnant conclusion, which is why he gave it that name. He didn’t want to just say non-transitive preferences were fine, but he did try to say that certain populations were incomparable, so as to break the chain of the argument. There’s a paper about it here, which I haven’t looked at too closely but which maybe you’d agree with: https://www.stafforini.com/docs/Parfit%20-%20Can%20we%20avoid%20the%20repugnant%20conclusion.pdf
Quickly, ’cuz I’ve been spending too much time here lately...
One. If my other values actively conflict with having more than a certain number of people, then they may overwhelm the considerations we’re talking about here and make them irrelevant.
Three. It’s not that you can’t do it precisely. It’s that you’re in a state of sin if you try to aggregate or compare them at all, even in the most loose and qualitative way. I’ll admit that I sometimes commit that sin, but that’s because I don’t buy into the whole idea of rigorous ethical philosophy to begin with. And only in extremis; I don’t think I’d be willing to commit it enough for that argument to really work for me.
Four. I’m not sure what you mean by “distribution of happiness”. That makes it sound like there’s a bottle of happiness and we’re trying to decide who gets to drink how much of it, or how to brew more, or how we can dilute it, or whatever. What I’m getting at is that your happiness and my happiness aren’t the same stuff at all; it’s more like there’s a big heap of random “happinesses”, none of them necessarily related to or substitutable for the others at all. Everybody gets one, but it’s really hard to say who’s getting the better deal. And, all else being equal, I’d rather have them be different from each other than have more identical ones.