This is one of the most horrifying things I have ever read. Most of the commenters have done a good job of poking holes in it, but I thought I’d add my take on a few things.
This reluctance is generally not substantiated or clarified with anything other than “clearly, this isn’t what we want”.
If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings,
No, the correct thing for an FAI to do is to use some resources to increase the number of beings and some to increase the utility of existing beings. You are assuming that creating new beings does not have diminishing returns. I find this highly unlikely. Most activities generate less value the more we do them. I don’t see why this would change for creating new beings.
Having new creatures that enjoy life is certainly a good thing. But so is enhancing the life satisfaction of existing creatures. I don’t think one of these things is categorically more valuable than the other. I think they are both incrementally valuable.
In other words, as I’ve said before, the question is not, “Should we maximize total utility or average utility?” It’s “How many resources should be devoted to increasing total utility, and how many to increasing average utility?”
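To make the diminishing-returns point concrete, here is a minimal toy sketch of my own (nothing from the original post; the value functions and constants are arbitrary illustrative assumptions). If both uses of a fixed budget have concave value, the best allocation is a split rather than all-or-nothing:

```python
# Toy model: a fixed resource budget R is split between creating new beings
# and improving the welfare of existing ones. Both uses are assumed (purely
# for illustration) to have diminishing returns: concave value functions.
import math

R = 100.0  # total resource budget (arbitrary units)

def value_new_beings(x):
    """Value from spending x on creating new beings (concave, illustrative)."""
    return 10.0 * math.sqrt(x)

def value_existing(x):
    """Value from spending x on existing beings (concave, illustrative)."""
    return 25.0 * math.log(1.0 + x)

def total_value(x):
    return value_new_beings(x) + value_existing(R - x)

# Grid search over how much of the budget goes to creating new beings.
best_x = max((i * R / 1000.0 for i in range(1001)), key=total_value)

print(f"best split: {best_x:.1f} on new beings, {R - best_x:.1f} on existing ones")
print(f"everything on new beings:      {total_value(R):.1f}")
print(f"everything on existing beings: {total_value(0.0):.1f}")
print(f"best split:                    {total_value(best_x):.1f}")
```

With these particular made-up numbers the interior split beats spending everything on either side, which is all the argument needs.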
and then induce a state of permanent and ultimate enjoyment in every one of them.
Wouldn’t it be even more efficient to just create creatures that feel nothing except a vague preference to keep on existing, which is always satisfied?
Or maybe we shouldn’t try to minmax morality. Maybe we should understand that phrases like “maximize pleasure” and “maximize preference satisfaction” are just rules of thumb that reflect a deeper and more complex set of moral values.
The most energy efficient way to create any kind of enjoyment, however, is to stimulate the brain-equivalent directly.
Again, you’re assuming all enjoyments are equivalent and don’t generate diminishing returns. Pleasure is valuable, but it has diminishing returns; you get more overall value by increasing many different kinds of positive things, not just pleasure.
In fact, I can hardly tell this apart from the concept of a Christian Heaven, which appears to be a place where Christians very much want to get.
If you’re right, this is just proof that Christians are really bad at constructing Heaven. But I don’t think you are; most Christians I know think Heaven is far more complex than just sitting around feeling good.
If you don’t want to be “reduced” to an eternal state of bliss, that’s tough luck. The alternative would be for the FAI to create an environment for you to play in, consuming precious resources that could sustain more creatures in a permanently blissful state.
The alternative you suggest sounds like a very good one. Creating all those blissful creatures would be a waste of valuable resources that could be used to better satisfy the preferences of already existing creatures. Again, creating new creatures is often a good thing, but it has diminishing returns.
Now for some rebuttals to your statements in the comments section:
If we all think it’s so great to be autonomous, to feel like we’re doing all of our own work, all of our own thinking, all of our own exploration—then why does anyone want to build an AI in the first place?
Again, complex values and diminishing returns. Autonomy is good, but if an FAI can help us obtain some other values it might be good to cede a little of our autonomy to it.
I find that comparable to a depressed person who doesn’t want to cure his depression, because it would “change who he is”. Well, yeah; but for the better.
It’s immoral and illegal to force people to medicate for a reason. That being said, depression isn’t a disease that changes what your desires are. It’s a disease that makes it harder to achieve your desires. If you cured it you’d be better at achieving your desires, which would be a good thing. If a cure radically changed what your desires were it would be a bad thing.
That said, I wouldn’t necessarily object to rewiring humans so that we feel pleasure more easily, as long as it fulfilled two conditions (see the toy sketch after this list):
1. The pleasure must have a referent. You have to do something to trigger the reward center in order to feel it; stimulating the brain directly would be bad.
2. The increase must be proportional. I should still enjoy a good movie more than a bad movie, even if I enjoy them both a lot more.
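Purely to illustrate what I mean by these two conditions (a toy sketch of my own, with invented numbers and names): scaling every reward by the same factor preserves which experiences are better than which, while direct stimulation collapses the ordering entirely.

```python
# Illustrative reward values for a few experiences (invented numbers).
experiences = {"bad movie": 2.0, "good movie": 6.0, "great conversation": 8.0}

def proportional_rewiring(reward, gain=3.0):
    """Scale every reward by the same factor: ordering and ratios survive."""
    return gain * reward

def wireheading(reward, ceiling=100.0):
    """Direct stimulation: every experience collapses to the same maximum."""
    return ceiling  # the input no longer matters

for name, base in experiences.items():
    print(f"{name:20s} base={base:4.1f}  "
          f"proportional={proportional_rewiring(base):5.1f}  "
          f"wireheaded={wireheading(base):5.1f}")
```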
Wei explains that most of the readership are preference utilitarians, who believe in satisfying people’s preferences, not maximizing pleasure.
That’s fine enough, but if you think that we should take into account the preferences of creatures that could exist, then I find it hard to imagine that a creature would prefer not to exist, than to exist in a state where it permanently experiences amazing pleasure.
I don’t think it’s ethical, or even possible, to take into account the hypothetical preferences of nonexistent creatures. That’s not even a logically coherent concept. If a creature doesn’t exist, then it doesn’t have preferences. I don’t think it’s logically possible to prefer to exist if you don’t already. Besides, as I said before, it would be even more efficient to create a creature that can’t feel pleasure and just has a vague preference to keep on existing, which would always be satisfied as long as it existed. But I doubt you would want to do that.
Besides, for every hypothetical creature that wants to exist and feel pleasure, there’s another hypothetical creature that wants that creature not to exist, or to feel pain. Why are we ignoring those creatures’ preferences?
The only way preference utilitarianism can avoid the global maximum of Heaven is to ignore the preferences of potential creatures. But that is selfish.
No, it isn’t. Selfishness is when you severely thwart someone’s preferences to mildly enhance your own. It’s not selfish to thwart nonexistent preferences, because they don’t exist. That’s like saying it’s gluttonous to eat nonexistent food, or vain to wear nonexistent costume jewelry.
The reason some people find the idea that you have to respect the preferences of all potential creatures plausible is that they believe (correctly) that they have an obligation to make sure people who exist in the future will have satisfied preferences. But that isn’t because nonexistent people’s preferences have weight. It’s because it’s good for whoever exists at the moment to have highly satisfied preferences, so as soon as a creature comes into existence you have a duty to make sure it is satisfied. And the reason those people’s preferences are highly satisfied should be that they are strong, powerful, and have lots of friends, not because they were genetically modified to have really, really unambitious preferences.
If you don’t want Heaven, then you don’t want a universally friendly AI. What you really want is an AI that is friendly just to you.
I want a universally friendly AI, but since nonexistent creatures don’t exist in this universe, not creating them isn’t universally unfriendly.
Also, I find it highly suspect, to say the least, that you start by arguing for “Heaven” because you think all human desires can be reduced to the desire to feel certain emotions, but then, when the commenters poke holes in that idea, you switch to a completely different justification (the logically incoherent idea that we have to respect the nonexistent preferences of nonexistent people) to defend it.
The infinite universe argument can be used as an excuse to do pretty much anything. Why not just torture and kill everyone and everything in our Hubble volume? … If there are infinite copies of everyone and everything, then there’s no harm done.
I find it helpful to think of having a copy as a form of life extension, except that the extra years are lived by a separate copy instead of being added to one continuous lifespan. An exact duplicate of you who lives for 70 years is similar to living an extra 70 years. So torturing everyone because they have duplicates would be equivalent to torturing someone for half their lifespan and then saying that it’s okay because they still have half a lifespan left over.
Whatever happens outside of our Hubble volume has no consequence for us, and neither adds to nor alleviates our responsibility.
Again, if these creatures exist somewhere else, then by creating them you aren’t really creating them; you’re extending their lifespan. Now, having a long lifespan is one way of having a high quality of life, but it isn’t the only way, and it too has diminishing returns, especially when the extra years are lived by a separate copy whose memories you don’t share. So it seems logical that, in addition to focusing on making people live longer, we should increase their quality of life in other ways, such as devoting resources to making them richer and more satisfied.