If you have the chance to create lives that are worth living at low cost, while knowing that you are not going to increase suffering by any unbearable amount, why wouldn’t you? Those people would also say that they would prefer to have lived than not to have lived, just like you, presumably.
It’s like a modal trolley problem: what if you can choose the future of the universe between, say, 0 lives or a trillion lives worth living? You are going to cause one future or the other with your actions, so there’s no point in calling one of them the ‘default’ when both are possible depending on what you do (unless you consider doing nothing in the trolley problem to be the default option, and the one you would choose).
If you consider that no amount of pleasure is better than any other amount (when the alternative is not feeling anything at all), then 0 lives is fine: pleasure has no inherent positive value compared to not living (if pleasure doesn’t add up, 0 is just as much as a billion), and there’s no suffering (negative value), so extinction is actually better than no extinction (at least if the extinction is fast enough). If you consider that pleasure has inherent positive value (more pleasure implies more positive value, which amounts to the same thing), why stop at a fixed amount of pleasure when you can create more pleasure by adding more worth-living lives? Stopping would be more arbitrary.
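To put the contrast in rough symbols (a sketch, with notation invented here rather than taken from any particular theory): write u_1, …, u_n for the wellbeing levels of n lives, all worth living (u_i > 0).

```latex
% Additive ("total") view: positive wellbeing aggregates, so adding any
% extra worth-living life strictly increases the total, and any finite
% stopping point looks arbitrary.
\[
  W_{\mathrm{add}}(u_1,\dots,u_n) = \sum_{i=1}^{n} u_i,
  \qquad
  W_{\mathrm{add}}(u_1,\dots,u_n,u_{n+1}) - W_{\mathrm{add}}(u_1,\dots,u_n)
  = u_{n+1} > 0 .
\]

% "Pleasure doesn't add up" view: only suffering counts against a world,
% so zero lives and a billion happy lives score exactly the same.
\[
  W_{\mathrm{neut}}(u_1,\dots,u_n) = -\sum_{i=1}^{n} \max(0,\,-u_i) = 0
  \quad \text{whenever every } u_i \ge 0 .
\]
```

Under the second scoring rule a fast extinction costs nothing, while under the first every additional worth-living life is a strict gain, which is the choice the trolley framing above is pointing at.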
If you consider that something has positive value, that typically implies that a universe with more of that thing is better. If you consider that preserving species has positive value, then, ceteris paribus, a universe with more species preserved is a better universe. It’s the same thing here.
If you have the chance to create lives that are worth living at low cost, while knowing that you are not going to increase suffering by any unbearable amount, why wouldn’t you?
Well, I suppose I would, especially if it meant going from no lives lived at all to some reasonable number of lives lived. Saying “I don’t care” would be unduly glib. But I don’t care enough to do it if it had a major cost to me, and definitely not given the number of lives already around.
I guess I’d be more likely to care somewhat more if those lives were diverse. Creating a million exactly identical lives seems less cool than creating just two significantly different ones. And the difference between a billion and a trillion is pretty unmoving to me, probably because I doubt the diversity of experiences among the trillion.
So long as I take reasonable care not to actively actualize a lot of people who are horribly unhappy on net, manipulating the number of future people doesn’t seem like some kind of moral imperative to me, more like an aesthetic preference to sculpt the future.
I’m definitely not responsible for people I don’t create, no matter what. I am responsible for any people I do create, but that responsibility is more in the nature of “not obviously screwing them over and throwing them into predictable hellscapes” than being absolutely sure they’ll all have fantastic lives.
I would actively resist packing the whole Universe with humans at the maximum just-barely-better-than-not-living density, because it’s just plain outright ugly. And I can’t even imagine how I could figure out an “optimal” density from the point of view of the experiences of the people involved, even if I were invested in nonexistent people.
Those people would also say that they would prefer to have lived than not to have lived, just like you, presumably.
I don’t feel like I can even formulate a preference between those choices. I don’t just mean that one is as good as the other. I mean that the whole question seems pointless and kind of doesn’t compute. I recognize that it does make some kind of sense in some way, but how am I supposed to form a preference about the past, especially when my preferences, or lack thereof, would be modified by the hypothetical-but-strictly-impossible enactment of that preference? What am I supposed to do with that kind of preference if I have it?
Anyway, if a given person doesn’t exist, in the strongest possible sense of nonexistence, where they don’t appear anywhere in the timeline, then that person doesn’t in fact have any preferences at all, regardless of what they “would” say in some hypothetical sense. You have to exist to prefer something. The nonexistent preferences of nonexistent people are, well… not exactly compelling?
I mean, if you want to go down that road, no matter what I do, I can only instantiate a finite number of people. If I don’t discount in some very harsh way for lack of diversity, that leaves an infinite number of people nonexistent. If I continue on the path of taking nonexistent people’s preferences into account; and I discover that even a “tiny” majority of those infinite nonexistent people “would” feel envy and spite for the people who do exist, and would want them not to exist; then should I take that infinite amount of preference into account, and make sure not to create anybody at all? Or should I maybe even just not create anybody at all out of simple fairness?
I think I have more than enough trouble taking even minimal care of even all the people who definitely do exist.
If you consider that pleasure has inherent positive value (more pleasure implies more positive value, which amounts to the same thing), why stop at a fixed amount of pleasure when you can create more pleasure by adding more worth-living lives? Stopping would be more arbitrary.
At a certain point the whole thing stops being interesting. And at a certain point after that, it just seems like a weird obsession. Especially if you’re giving up on other things. If you’ve populated all the galaxies but one, that last empty galaxy seems more valuable to me than adding however many people you can fit into it.
Also, what’s so great about humans specifically? If I wanted to maximize pleasure, shouldn’t I try to create a bunch of utility monsters that only feel pleasure, instead of wasting resources on humans whose pleasure is imperfect? If you want, they can be utility monsters with two capacities: to feel pleasure, and to prefer, in whatever sense you like, their own existence to their nonexistence. And if I do have to create humans, should I try to make them as close to those utility monsters as possible while still meeting the minimum definition of “human”?
If you consider that something has positive value, that typically implies that a universe with more of that thing is better.
I like cake. I don’t necessarily want to stuff the whole universe with cake (or paperclips). I can’t necessarily say exactly how much cake I want to have around, but it’s not “as much as possible”. Even if I can identify an optimal amount of anything to have, the optimum does not have to be the maximum.
… and, pattern matching on previous conversations and guessing where this one might go, I think that formalized ethical systems, where you try to derive what you “should” do using logical inference from some fixed set of principles, are pointless and often dangerous. That includes all of the “measure and maximize pleasure/utility” variants, especially if they require you to aggregate people’s utilities into a common metric.
There’s no a priori reason you should expect to be able to pull anything logically consistent out of a bunch of ad-hoc, evolved ethical intuitions, and experience suggests that you can’t do that. Everybody who tries seems to come up with something that has implications those same intuitions say are grossly monstrous. And in fact when somebody gets power and tries to really enact some rigid formalized system, the actual consequences tend to be monstrous.
“Humanclipping” the universe has that kind of feel for me.
At a certain point the whole thing stops being interesting. And at a certain point after that, it just seems like a weird obsession. Especially if you’re giving up on other things. If you’ve populated all the galaxies but one, that last empty galaxy seems more valuable to me than adding however many people you can fit into it.
I mean, pleasure[1] is a terminal value for most of us because we like it (and suffering a disvalue because we dislike it); ‘lifeless’ matter isn’t. I would prefer to have animals existing rather than zero animals, at least if we can make sure that they typically enjoy themselves, or that it will lead to a state of affairs in which most of them enjoy themselves most of the time. The same goes for humans specifically.
Also, what’s so great about humans specifically?
I didn’t use the word ‘humans’ for a reason.
Everybody who tries seems to come up with something that has implications those same intuitions say are grossly monstrous. And in fact when somebody gets power and tries to really enact some rigid formalized system, the actual consequences tend to be monstrous.
The reason we can say that “experience suggests that you can’t do that” is that we have some standard to judge it by. We need a specific reason to say that something is ‘monstrous’, just as you’d give reasons for why any action is monstrous. In principle, no one needs to be wronged[2]. We can assume a deontological commitment not to kill any living being or harm anyone, if that’s what bothers you. Sure, you can say that we are arbitrarily weakening our consequentialist commitment, but I haven’t said at any point that it had to be ‘at all costs’ regardless (I know that I was commenting within the context of the article, but I’m speaking personally and I haven’t even read most of it).
[1] The thing we optimise for doesn’t need to be literal (‘naive’) ‘pleasure’ and nothing else.
[2] This is a hypothetical about a post-scarcity society, where you definitely have resources to spare and no one needs to be compromised to bring a new life into the world.