The one safe bet is that we’ll be trying to maximize our future values, but in the emulated-brains scenario it’s very hard to guess what those values would be. It’s easy to underestimate our present knee-jerk egalitarianism: we all think that being human, on its own, entitles you to continued existence. Some will accept an exception for heinous murderers, but even this is controversial. A human being ceasing to exist for some preventable reason is not just generally considered a bad thing. It’s one of the worst things.
Like most people, I don’t expect that this value will be fully extended to emulated individuals. I do think it’s worth having a discussion about what aspects of it might survive into the emulated minds future. Some of it surely will.
I’ve seen some (e.g. Marxists) argue that these fuzzy values questions just don’t matter, because economic incentives will always trump them. But as I see it, the society that finally produces the technology for emulated minds will be the wealthiest and most prosperous human society in history. Historical trends suggest they will take the basic right to a comfortable human life even more seriously than we do now, and they will have the means to essentially guarantee it for the ~9 billion humans. What will these future people lack but want, which emulated minds could give them, and which they will judge more valuable than staying true to a deeply held ethical principle? Faster scientific progress, better entertainment, more security, more stuff? The analogy is imperfect, but consider that eugenic programs could advance all of these goals today, albeit slowly and inefficiently. So how much faster and more promising would eugenics have to be before we resolved to just go for it despite our ethical misgivings? The trend I see is that the richer we get, the more repugnant eugenics seems. In a richer world, a larger share of our priorities is overtly ethical. The rich people who turn brain scans into sentient emulations will be living in an intensely ethical society. Futurists must guess at their ethical priorities, because those priorities really will matter to outcomes.
I’ll throw out two possibilities, chosen for brevity and not plausibility: 1. Emulations will be seen only as a means of human immortality, and de novo minds that are not one-to-one continuous with humans will simply not exist. 2. We’ll develop strong intuitions that for programs, “he’s dead” and “he’s not running” are importantly different (cue parrot sketch).
There is a difference between what we might each choose if we ruled the world and what we will together choose as the net result of our individual choices. It is not enough that many of us share your ethical principles; we would also need to coordinate to achieve the outcomes those principles suggest. That is much, much harder.
It’s a tragedy of the commons combined with selection pressure. If even a few people decide to spread out and make as many copies of themselves as possible, there will be slightly more of them in the next generation. Those new multipliers will copy themselves in turn. Eventually, the population is swamped by individuals who favor unrestrained reproduction. This happens even if the effect is very slight: if 99% of the world thinks it’s good to make only one copy a year, and the remaining 1% usually makes one copy but adds an extra every ten years (averaging 1.1 copies a year), then given enough time, the vast majority of the population will be the 1.1-copies-a-year sort. The population balloons, and we no longer have that tremendous wealth per capita.
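The arithmetic behind this takeover can be sketched in a few lines. This is an illustrative toy model, not anything from the text: I assume the majority doubles each year (each individual makes one copy) while the minority averages a factor of 2.1 (one copy a year plus an extra every ten years), and I track population shares until the minority crosses 50%.

```python
# Toy model of the selection dynamic: a tiny minority with a slightly
# higher copy rate eventually dominates the population.
majority, minority = 0.99, 0.01  # initial population shares

years = 0
while minority / (majority + minority) < 0.5:
    majority *= 2.0   # everyone makes one copy per year
    minority *= 2.1   # one copy per year, plus an extra every ten years on average
    years += 1

print(years)  # -> 95: the eager copiers are the majority within a century
```

The minority's share grows by a factor of 2.1/2.0 = 1.05 each year relative to the majority, so even a 5% edge compounds into dominance on historical timescales.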
Do you expect the future to be as Bostrom describes in this section, if the world is not taken over by a single superintelligence?