I’ll just comment on what most people are missing, since most of the reactions seem to be missing the same thing.
Wei explains that most of the readership are preference utilitarians, who believe in satisfying people’s preferences, not maximizing pleasure.
That’s fine enough, but if you think we should take into account the preferences of creatures that could exist, then I find it hard to imagine that a creature would prefer not existing at all over existing in a state of permanent, amazing pleasure.
Given that potential creatures outnumber existing creatures many times over, the preferences of existing creatures (that we selfishly keep the universe’s resources to ourselves so we can explore, think, hold lofty and misguided impressions of ourselves, and so on) don’t count for much against the far greater number of creatures that would prefer to exist and be wireheaded rather than not exist at all.
The only way preference utilitarianism can avoid the global maximum of Heaven is to ignore the preferences of potential creatures. But that is selfish.
If you don’t want Heaven, then you don’t want a universally friendly AI. What you really want is an AI that is friendly just to you.
I doubt anyone here acts in a manner remotely similar to the way utilitarianism recommends. Utilitarianism is an unbiological conception about how to behave—and consequently is extremely difficult for real organisms to adhere to. Real organisms frequently engage in activities such as nepotism. Some people pay lip service to utilitarianism because it sounds nice and signals a moral nature—but they don’t actually adhere to it.
Eliezer posted an argument against taking into account the preferences of people who don’t exist. I think utilitarianism, in order to be consistent, perhaps does need to take into account those preferences, but it’s not clear how that would really work. What weights do you put on the utility functions of those non-existent creatures?
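For concreteness, here is one way the weighting question could be written down. This is just my own sketch of the aggregation problem, not anything Eliezer or Wei proposed:

$$U_{\text{total}} \;=\; \sum_{i \in E} u_i(\text{outcome}) \;+\; \sum_{j \in P} w_j \, u_j(\text{outcome}), \qquad 0 \le w_j \le 1,$$

where $E$ is the set of existing creatures, $P$ the set of potential ones, and the $w_j$ are the weights in question. Setting every $w_j = 0$ ignores potential creatures entirely; setting $w_j = 1$ counts them on a par with existing ones, which is what drives the Heaven conclusion above.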
I don’t find Eliezer’s argument convincing. The infinite universe argument can be used as an excuse to do pretty much anything. Why not just torture and kill everyone and everything in our Hubble volume? Surely identical copies exist elsewhere. If there are infinite copies of everyone and everything, then there’s no harm done.
That doesn’t fly. Whatever happens outside of our Hubble volume has no consequence for us, and neither adds to nor alleviates our responsibility. Infinite universe or not, we are still responsible not just for what is, but also for what could be, in the space under our influence.