Let’s see what we need to assume for such a fictional scenario. First, we have (1) functionally successful brain emulation exists, at a level where the emulation includes memory and personality. Then I see a choice between (2a) the world is still run by human beings, and (2b) the world has powerful AI. Finally, we have a choice between (3a) there has been no discovery of a need for quantum neuroscience yet, and (3b) a quantum neuroscience exists, but a quantum implementation of the personal state machine is not thought necessary to preserve consciousness.
In my opinion, (1) is in tension with (3a) and even with (2a). Given that we are assuming some form of quantum-mind theory to be correct, it seems unlikely that you could have functionally adequate uploads of whole human beings without this having already been discovered. And having the hardware, the models, and the brain data needed to run a whole human sim should imply that you are well past the threshold of being able to create AI that is nonhuman but with human intellectual potential.
So by my standards, the best chance to make the story work is the combination of (1) with (3b), and possibly with (2b) also. The (2b) scenario might be set after a “semi-friendly” singularity, in which an Iain M. Banks-style, Culture-like existence for humanity has been created, and the science and technology for brain prostheses has been developed by AIs. Since the existence of a world-ruling friendly super-AI (a “Sysop”) raises so many other issues, it might be better to think in terms of an “Aristoi”-like world where there’s a benevolent human ruling class who have used powerful narrow AI to produce brain emulation technology and other boons to humanity, and who keep very tight control on its spread. The model here might be Vinge’s Peace Authority, a dictatorship under which the masses have a medieval existence and the rulers have the advanced technology, which they monopolize for the sake of human survival.
However it works, I think we have to suppose a technocratic elite who somehow know enough to produce working brain prostheses, but not enough to realize the full truth about consciousness. They should be heavily reliant on AI to do the R&D for them, but they’ve also managed to keep the genie of transhuman AI trapped in its box so far. I still have trouble seeing this as a stable situation—e.g. a society that lasts for several generations, long enough for a significant subpopulation to consist of “ems”. It might help if we are only dealing with a small population, either because most of humanity is dead or most of them are long-term excluded from the society of uploads.
And even after all this world-building effort, I still have trouble just accepting the scenario. Whole brain emulation good enough to provide a functionally viable copy of the original person implies enormously destabilizing computational and neuroscientific advances. It’s also not something that is achieved in a single leap; to get there, you would have to traverse a whole “uncanny valley” of bad and failed emulations.
Long before you faced the issue of whether a given implementation of a perfect emulation produced consciousness, you would have to deal with subcultures who believed that highly imperfect emulations are good enough. Consider all the forms of wishful thinking that afflict parents regarding their children, and people who are dying regarding their prospects of a cure, and on and on; and imagine how those tendencies would interact with a world in which a dozen forms of brain-simulation snake oil are on the market.
Look at the sort of artificial systems which are already regarded by some people as close to human. We already have people marrying video game characters, and aiming for immortality via “lifebox”. To the extent that society wants the new possibilities that copies and backups are supposed to provide, it will not wait around while technicians try to chase down the remaining bugs in the emulation process. And what if some of your sims, or the users of brain prostheses, decide that what the technicians call bugs are actually features?
So this issue—autonomous personlike entities in society, which may or may not have subjective experience—is going to be upon us before we have ems to worry about. A child with a toy or an imaginary friend may speak very earnestly about what its companion is thinking or feeling. Strongly religious people may also have an intense imaginative involvement, a personal relationship, with God, angels, spirits. These animistic, anthropomorphizing tendencies are immediately at work whenever there is another step forward in the simulation of humanity.
At the same time, contemporary humans now spend so much time interacting via computer that they have begun to internalize many of the concepts and properties of software and computer networks. It therefore becomes increasingly easy to create a nonhuman intelligent agent which passes for an Internet-using human. A similar consideration will apply to neurological prostheses: before we have cortical prostheses based on a backup of the old natural brain, we will have cortical prostheses which are meant to be augmentations, and so the criterion for whether even a purely restorative cortical prosthesis is adequate will increasingly be based on the cultural habits and practices of people who were using cortical prostheses for augmentation.
There’s quite a bit of least-convenient-possible-world intent in the thought experiment. Yes: assume that things are run by humans, and that transhuman or nonhuman AIs are either successfully not pursued or not achievable with anything close to the effort required for ems, and are therefore still in the future. Assume that the ems are made using advanced brain scanning and extensive brute-force reverse engineering with narrow AIs, with the people in charge not actually understanding the brain well enough to build one from scratch themselves. Assume that strong social taboos in this least convenient possible world prevent running ems in anything but biological or extremely lifelike artificial bodies, at the same subjective speed as biological humans, and that there is no rampant copying of ems that would destabilize society and lead to a whole new set of thought-experimental problems.
The wishful-thinking, not-really-there ems are a good point, but again, the least convenient possible world would probably be really fixated on the idea that the brain is the self. Its culture would reject things like lifeboxes as cute art projects and go straight for whole brain emulation, with any helpful attempts to fudge broken output with a pattern-matching chatbot AI being actively looked out for, quickly discovered, and leading to swift disgrace for the shortcut-using researchers. It is possible that things would still lead to some kind of wishful-thinking outcome, but getting to the stage where the brain emulation is producing actions recognizable as resulting from the personality and memories of the person it was taken from, without any cheating obvious to the researchers such as a lifebox pattern-matcher, sounds like it should be pretty far along toward being the real thing, given that the in-brain encoding of the personality and memories would be pretty much a complete black box.
There’s still a whole load of unknown unknowns about ways things could go wrong at this point, but it looks a lot more like the “NOW what do we do?” situation I was after in the grandparent post than the admittedly likely-in-our-world scenario of people treating lifebox chatbots as their dead friends.