There’s quite a bit of least-convenient-possible-world intent in the thought experiment. Yes: assume that things are run by humans, and that transhuman or nonhuman AIs are either successfully not pursued or not achievable with anything close to the effort required for EMs, and so remain in the future. Assume that the EMs are made using advanced brain scanning and extensive brute-force reverse engineering with narrow AIs, with the people in charge not actually understanding the brain well enough to build one from scratch themselves. Assume that strong social taboos in this least convenient possible world prevent running EMs in anything but biological or extremely lifelike artificial bodies, at the same subjective speed as biological humans, and that there is no rampant copying of them that would destabilize society and lead to a whole new set of thought-experimental problems.
The wishful-thinking, not-really-there EMs are a good point, but again, the least convenient possible world would probably be firmly fixated on the idea that the brain is the self. Its culture would reject things like lifeboxes as cute art projects and go straight for whole brain emulation, with any helpful attempts to fudge broken output with a pattern-matching chatbot AI being actively watched for, quickly discovered, and leading to swift disgrace for the shortcut-taking researchers. Things might still end in some kind of wishful-thinking outcome, but getting to the stage where the brain emulation produces actions recognizable as coming from the personality and memories of the person it was scanned from, without any cheating obvious to the researchers such as a lifebox pattern matcher, sounds like it should be pretty far along toward being the real thing, given that the in-brain encoding of the personality and memories would be pretty much a complete black box.
There’s still a whole load of unknown unknowns about ways things could go wrong at this point, but it looks a lot more like the “NOW what do we do?” situation I was after in the grandparent post than the admittedly likely-in-our-world scenario of people treating lifebox chatbots as their dead friends.