I don’t care whether future generations get born or not. I only care whether people who actually are born do OK. If anything, I find it creepy when Bostrom or whoever talks about a Universe absolutely crawling with “future generations”, and how critical it supposedly is to create as many as possible. It always sounds like a hive or a bacterial colony or something.
It’s all the less interesting because a lot of people who share that vision seem to have really restricted ideas of who or what should count as a “future generation”. Why are humans the important class, either as a reference class or as a class of beings with value? And who’s in the “human club” anyway?
Seems to me that the biggest problem with an apocalypse isn’t that a bunch of people never get born; it’s that a bunch of living people get apocalypsticized. Humans are one thing, but why should I care about “humanity”?
Thanks for the comment! That’s definitely an important philosophical problem that I very much glossed over in the concluding section.
It’s sort of orthogonal to the main point of the post, but I will briefly say this: 10 years ago I would have agreed with your point of view completely. I believed in the slogan you sometimes hear people say: “we’re in favour of making people happy, and neutral about making happy people.” But now I don’t agree with this. The main thing that changed my mind was reading Reasons and Persons, and in particular the “mere-addition paradox”. That’s convinced me that if you try to be neutral on making new happy people, then you end up with non-transitive preferences, and that seems worse to me than just accepting that maybe I do care about making happy people after all.
Maybe you’re already well aware of these arguments and haven’t been convinced, which is fair enough (would be interested to hear more about why), but thought I would share in case you’re not.
I have probably heard those arguments, but the particular formulation you mention appears to be embedded in a book of ethical philosophy, so I can’t check, because I haven’t got a lot of time or money for reading whole ethical philosophy books. I think that’s a mostly doomed approach that nobody should spend too much time on.
I looked at the Wikipedia summary, for whatever that’s worth, and here are my standard responses to what’s in there:
I reject the idea that I only get to assign value to people and their quality of life, and don’t get to care about other aspects of the universe in which they’re embedded and of their effects on it. I am, if you push the scenario hard enough, literally willing to value maintaining a certain amount of VOID, sort of a “void preserve”, if you will, over adding more people. And it gets even hairier if you start asking difficult questions about what counts as a “person” and why. And if you broaden your circle of concern enough, it starts to get hard to explain why you give equal weight to everything inside it.
Even if you do restrict yourself only to people, which again I don’t, step 1, from A to A+, doesn’t exactly assume that you can always add a new group of people without in any way affecting the old ones, but it does seem to encourage thinking that way, which is not necessarily a win.
Step 2, where “total and average happiness increase” from A+ to B-, is the clearest example of how the whole argument requires aggregating happiness… and it’s not a valid step. You can’t legitimately talk about, let alone compute, “total happiness”, “average happiness”, “maximum happiness”, or indeed ANYTHING that requires you to put two or more people’s happiness on the same scale. You may not even be able to do it for one person. At MOST you can impose a very weak partial ordering on states of the universe (I think that’s the sort of thing Pareto talked about, but again I don’t study this stuff...). And such a partial ordering doesn’t help at all when you’re trying to look at populations.
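To make concrete what I mean by a weak partial ordering, here is a toy sketch (entirely my own construction, with made-up numbers; nothing like this is from the book as far as I know): you only get to call one state at least as good as another when nobody is worse off, and the ordering simply goes silent otherwise, including whenever the two populations differ in size.

```python
# Toy sketch of a Pareto-style partial ordering on "states of the universe",
# where a state is just a tuple of per-person welfare levels. The numbers are
# made up and mean nothing beyond "higher is better for that one person, on
# that person's own private scale"; no cross-person aggregation is attempted.

def pareto_compare(state_a, state_b):
    """Return 'A', 'B', 'equal', or 'incomparable'."""
    if len(state_a) != len(state_b):
        return "incomparable"   # different population sizes: the ordering goes silent
    a_ge_b = all(a >= b for a, b in zip(state_a, state_b))
    b_ge_a = all(b >= a for a, b in zip(state_a, state_b))
    if a_ge_b and b_ge_a:
        return "equal"
    if a_ge_b:
        return "A"
    if b_ge_a:
        return "B"
    return "incomparable"       # someone is better off in each: no verdict

print(pareto_compare((5, 5, 5), (5, 6, 5)))     # B: one person better off, nobody worse
print(pareto_compare((5, 5, 5), (9, 1, 5)))     # incomparable: no trade-offs allowed
print(pareto_compare((5, 5, 5), (5, 5, 5, 5)))  # incomparable: different population sizes
```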
If you could aggregate or compare happiness, the way you did it wouldn’t necessarily be independent of things like how diverse various people’s happiness was; happiness doesn’t have to be a fungible commodity. As I said before, I’d probably rather create two significantly different happy people than a million identical “equally happy” people.
So I don’t accept that argument requires me to accept the repugnant conclusion on pain of having intransitive preferences.
That said, of course I do have some non-transitive preferences, or at least I’m pretty sure I do. I’m human, not some kind of VNM-thing. My preferences are going to depend on when you happen to ask me a question, how you ask it, and what particular consequences seem most salient. Sure, I often prefer to be consistent, and if I explicitly decided on X yesterday I’m not likely to choose Y tomorrow. Especially not if I feel like maybe I’ve led somebody to depend on my previous choice. But consistency isn’t always going to control absolutely.
Even if it were possible, getting rid of all non-transitive preferences, or even all revealed non-transitive preferences, would demand deeply rewriting my mind and personality, and I do not at this time wish to do that, or at least not in that way. It’s especially unappealing because every set of presumably transitive preferences that people suggest I adopt seems to leave me preferring one or another kind of intuitively crazy outcome, and I believe that’s probably going to be true of any consistent system.
My intuitions conflict, because they were adopted ad-hoc through biological evolution, cultural evolution, and personal experience. At no point in any of that were they ever designed not to conflict. So maybe I just need to kind of find a way to improve the “average happiness” of my various intuitions. Although if I had to pursue that obviously bogus math analogy any further, I’d say something like the geometric mean would be closer.
I suspect you also can find some intransitive preferences of your own if you go looking, and would find more if you had a perfect view of all your preferences and their consequences. And I personally think you’re best off to roll with that. Maybe intransitive preferences open you to being Dutch-booked, but trying to have absolutely transitive preferences is likely to make it even easier to get you to just go and do something intuitively catastrophic, while telling yourself you have to want it.
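And just to be concrete about what I’m accepting the risk of, here is the standard money-pump version of the Dutch-book worry in toy form (the options, the fee, and the number of trades are all made up by me):

```python
# Toy money pump: an agent with cyclic preferences (B over A, C over B, A over C)
# who will pay a small fee for any "upgrade" can be walked around the cycle
# indefinitely, ending up holding what it started with but poorer.

prefers = {("A", "B"): "B", ("B", "C"): "C", ("C", "A"): "A"}  # cyclic preferences

def preferred(current, offered):
    """Which of the two options the agent prefers, whichever order you ask in."""
    return prefers.get((current, offered)) or prefers.get((offered, current))

holding, money, fee = "A", 100.0, 1.0
for offered in ["B", "C", "A"] * 3:            # walk the cycle three times
    if preferred(holding, offered) == offered:
        holding, money = offered, money - fee  # pay a fee for each "upgrade"

print(holding, money)  # back at A, nine upgrade fees poorer
```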
You raise lots of good objections there. I think most of them are addressed quite well in the book though. You don’t need any money, because it seems to be online for free: https://www.stafforini.com/docs/Parfit%20-%20Reasons%20and%20persons.pdf And if you’re short of time it’s probably only the last chapter you need to read. I really disagree with the suggestion that there’s nothing to learn from ethical philosophy books.
For point 1: Yes you can value other things, but even if people’s quality of life is only a part of what you value, the mere-addition paradox raises problems for that part of what you value.
For point 2: That’s not really an objection to the argument.
For point 3: I don’t think the argument depends on the ability to precisely aggregate happiness. The graphs are helpful ways of conveying the idea with pictures, but the ability to quantify a population’s happiness and plot it on a graph is not essential (and obviously impossible in practice, whatever your stance on ethics). For the thought experiment, it’s enough to imagine a large population at roughly the same quality of life, then adding new people at a lower quality of life, then increasing their quality of life by a lot and only slightly lowering the quality of life of the original people, then repeating, etc. The reference to what you are doing to the ‘total’ and ‘average’ as this happens is supposed to be particularly addressed at those people who claim to value the ‘total’, or ‘average’, happiness I think. For the key idea, you can keep things more vague, and the argument still carries force.
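If it helps, here is a deliberately crude numeric sketch of that chain (the numbers are entirely made up by me and nothing in the argument depends on them; the point is only to make the shape of the steps visible):

```python
# A deliberately crude numeric sketch of the mere-addition chain. All numbers
# are made up; nothing in the argument depends on being able to quantify
# quality of life like this.

level, size = 100.0, 10          # population A: 10 people, all at a high level

for step in range(6):
    # One round of the story: add as many people again with lives barely worth
    # living (A -> A+), then redistribute so that everyone ends up equal, a bit
    # worse off than the original A people but better off than the newcomers
    # were (A+ -> B). Here we only track the net effect of each round:
    size *= 2                    # twice as many people...
    level *= 0.6                 # ...each somewhat worse off than before
    print(f"step {step + 1}: {size} people at level {level:.1f}")

# Iterate and you head toward an enormous population whose lives are barely
# worth living: the repugnant conclusion.
```

Each individual step is supposed to look like an improvement, or at least no worse for anyone, which is exactly why the endpoint is so uncomfortable.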
For point 4: You can try to value things about the distribution of happiness, as a way out. I remember that’s discussed in the book as well, as are a number of other different approaches you could try to take to population ethics, though I don’t remember the details. Ultimately, I’m not sure what step in the chain of argument that would help you to reject.
On the non-transitive preferences being ok: that’s a fair take, and something like this is ultimately what Parfit himself tried to do I think. He didn’t like the repugnant conclusion, hence why he gave it that name. He didn’t want to just say non-transitive preferences were fine, but he did try to say that certain populations were incomparable, so as to break the chain of the argument. There’s a paper about it here which I haven’t looked at too much but maybe you’d agree with: https://www.stafforini.com/docs/Parfit%20-%20Can%20we%20avoid%20the%20repugnant%20conclusion.pdf
Quickly, ’cuz I’ve been spending too much time here lately...
One. If my other values actively conflict with having more than a certain given number of people, then they may overwhelm the considerations we’re talking about here and make them irrelevant.
Three. It’s not that you can’t do it precisely. It’s that you’re in a state of sin if you try to aggregate or compare them at all, even in the most loose and qualitative way. I’ll admit that I sometimes commit that sin, but that’s because I don’t buy into the whole idea of rigorous ethical philosophy to begin with. And only in extremis; I don’t think I’d be willing to commit it enough for that argument to really work for me.
Four. I’m not sure what you mean by “distribution of happiness”. That makes it sound like there’s a bottle of happiness and we’re trying to decide who gets to drink how much of it, or how to brew more, or how we can dilute it, or whatever. What I’m getting at is that your happiness and my happiness aren’t the same stuff at all; it’s more like there’s a big heap of random “happinesses”, none of them necessarily related to or substitutable for the others at all. Everybody gets one, but it’s really hard to say who’s getting the better deal. And, all else being equal, I’d rather have them be different from each other than have more identical ones.
If you have the chance to create lives that are worth living at low cost, while you know that you are not going to increase suffering in any unbearable amount, why wouldn’t you? Those people would also say that they would prefer to have lived rather than not to have lived, just like you presumably would.
It’s like a modal trolley problem: what if you can choose the future of the universe between, say, 0 lives or a trillion lives worth living? You are going to cause one future or the other with your actions; there’s no point in saying that one is the ‘default’ that you’ll choose if both are possible depending on your actions (unless you consider not doing anything in the trolley problem to be the default option, and the one you would choose).
If you consider that no amount of pleasure is better than any other amount (if the alternative is to not feel anything), then 0 lives is fine, because pleasure has no inherent positive value compared to not living (if pleasure doesn’t add up, 0 is just as much as a billion), and there’s no suffering (negative value), so extinction is actually better than no extinction (at least if the extinction is fast enough). If you consider that pleasure has inherent positive value (more pleasure implies more positive value, same thing), why stop at a fixed amount of pleasure when you can create more pleasure by adding more lives worth living? Stopping there is more arbitrary.
If you consider that something has positive value, that typically implies that a universe with more of that thing is better. If you consider that preserving species has positive value then, ceteris paribus, a universe with more species preserved is a better universe. It’s the same thing.
If you have the chance to create lives that are worth living at low cost, while you know that you are not going to increase suffering in any unbearable amount, why wouldn’t you?
Well, I suppose I would, especially if it meant going from no lives lived at all to some reasonable number of lives lived. “I don’t care” is unduly glib. I don’t care enough to do it if it had a major cost to me, definitely not given the number of lives already around.
I guess I’d be more likely to care somewhat more if those lives were diverse. Creating a million exactly identical lives seems less cool than creating just two significantly different ones. And the difference between a billion and a trillion is pretty unmoving to me, probably because I doubt the diversity of experiences among the trillion.
So long as I take reasonable care not to actively actualize a lot of people who are horribly unhappy on net, manipulating the number of future people doesn’t seem like some kind of moral imperative to me, more like an aesthetic preference to sculpt the future.
I’m definitely not responsible for people I don’t create, no matter what. I am responsible for any people I do create, but that responsibility is more in the nature of “not obviously screwing them over and throwing them into predictable hellscapes” than being absolutely sure they’ll all have fantastic lives.
I would actively resist packing the whole Universe with humans at the maximum just-barely-better-than-not-living density, because it’s just plain outright ugly. And I can’t even imagine how I could figure out an “optimal” density from the point of view of the experiences of the people involved, even if I were invested in nonexistent people.
Those people would also say that they would prefer to have lived rather than not to have lived, just like you presumably would.
I don’t feel like I can even formulate a preference between those choices. I don’t just mean that one is as good as the other. I mean that the whole question seems pointless and kind of doesn’t compute. I recognize that it does make some kind of sense in some way, but how am I supposed to form a preference about the past, especially when my preferences, or lack thereof, would be modified by the hypothetical-but-strictly-impossible enactment of that preference? What am I supposed to do with that kind of preference if I have it?
Anyway, if a given person doesn’t exist, in the strongest possible sense of nonexistence, where they don’t appear anywhere in the timeline, then that person doesn’t in fact have any preferences at all, regardless of what they “would” say in some hypothetical sense. You have to exist to prefer something. The nonexistent preferences of nonexistent people are, well… not exactly compelling?
I mean, if you want to go down that road, no matter what I do, I can only instantiate a finite number of people. If I don’t discount in some very harsh way for lack of diversity, that leaves an infinite number of people nonexistent. If I continue on the path of taking nonexistent people’s preferences into account; and I discover that even a “tiny” majority of those infinite nonexistent people “would” feel envy and spite for the people who do exist, and would want them not to exist; then should I take that infinite amount of preference into account, and make sure not to create anybody at all? Or should I maybe even just not create anybody at all out of simple fairness?
I think I have more than enough trouble taking even minimal care of even all the people who definitely do exist.
If you consider that pleasure has inherent positive value (more pleasure implies more positive value, same thing), why stop at a fixed amount of pleasure when you can create more pleasure by adding more lives worth living? Stopping there is more arbitrary.
At a certain point the whole thing stops being interesting. And at a certain point after that, it just seems like a weird obsession. Especially if you’re giving up on other things. If you’ve populated all the galaxies but one, that last empty galaxy seems more valuable to me than adding however many people you can fit into it.
Also, what’s so great about humans specifically? If I wanted to maximize pleasure, shouldn’t I try to create a bunch of utility monsters that only feel pleasure, instead of wasting resources on humans whose pleasure is imperfect? If you want, it can be utility monsters with two capacities: to feel pleasure, and to prefer, in whatever sense you like, their own existence to their nonexistence. And if I do have to create humans, should I try to make them as close to those utility monsters as possible while still meeting the minimum definition of “human”?
If you consider that something has positive value, that typically implies that a universe with more of that thing is better.
I like cake. I don’t necessarily want to stuff the whole universe with cake (or paperclips). I can’t necessarily say exactly how much cake I want to have around, but it’s not “as much as possible”. Even if I can identify an optimal amount of anything to have, the optimum does not have to be the maximum.
… and, pattern matching on previous conversations and guessing where this one might go, I think that formalized ethical systems, where you try to derive what you “should” do using logical inference from some fixed set of principles, are pointless and often dangerous. That includes all of the “measure and maximize pleasure/utility” variants, especially if they require you to aggregate people’s utilities into a common metric.
There’s no a priori reason you should expect to be able to pull anything logically consistent out of a bunch of ad-hoc, evolved ethical intuitions, and experience suggests that you can’t do that. Everybody who tries seems to come up with something that has implications those same intuitions say are grossly monstrous. And in fact when somebody gets power and tries to really enact some rigid formalized system, the actual consequences tend to be monstrous.
“Humanclipping” the universe has that kind of feel for me.
At a certain point the whole thing stops being interesting. And at a certain point after that, it just seems like a weird obsession. Especially if you’re giving up on other things. If you’ve populated all the galaxies but one, that last empty galaxy seems more valuable to me than adding however many people you can fit into it.
I mean, pleasure[1] is a terminal value for most of us because we like it (and suffering because we dislike it), not ‘lifeless’ matter. I prefer having animals exist to having zero animals, at least if we can make sure that they typically enjoy themselves, or that it will lead to a state of affairs in which most enjoy themselves most of the time. The same goes for humans specifically.
Also, what’s so great about humans specifically?
I didn’t use the word ‘humans’ for a reason.
Everybody who tries seems to come up with something that has implications those same intuitions say are grossly monstrous. And in fact when somebody gets power and tries to really enact some rigid formalized system, the actual consequences tend to be monstrous.
The reason we can say that “experience suggests that you can’t do that” is because we have some standard to judge it by. We need a specific reason to say that something is ‘monstrous’, just as you’d give reasons for why any action is monstrous. In principle, no one needs to be wronged[2]. We can assume a deontological commitment not to kill any life or damage anyone if that’s what bothers you. Sure, you can say that we are arbitrarily weakening our consequentialist commitment, but I haven’t said at any point that it had to be ‘at all costs’ regardless (I know that I was commenting within the context of the article, but I’m speaking personally and I haven’t even read most of it).
[1] It doesn’t need to be literal (‘naive’) pleasure, and nothing else, that we optimise for.
[2] This is a hypothetical for a post-scarcity society, where you definitely have resources to spare and no one needs to be compromised to get a life into the world.
That was a nice clear explanation. Thank you.
… but you still haven’t sold me on it mattering.