Yes, I’ve read this in science fiction too. Do you want me to write my own science fiction in opposition to yours, about the monadic physics of the soul and the sinister society of the zombie uploads? It would be a stirring tale about the rise of a culture brainwashed to believe that simulations of mind are the same as the real thing, and the race against time to prevent it from implementing its deluded utopia by force.
Telling a vivid little story about how things would be if so-and-so were true is not actually an argument in favor of the proposition. The relevant LW buzzword is fictional evidence.
Yes, I think your writing a “counter-fiction” would be a very useful exercise and might clarify to me how you can continue to hold the position that you do. I honestly do not fathom it. I admit this is a fact about my own state of knowledge, and I would like it if you could at least show me an example of a fictional universe where you are proven right, as I have given an account of a fictional universe where you are proven wrong.
I don’t intend for the story to serve as any kind of evidence, but I did intend for it to serve as an argument. If you found yourself in the position described in the story, would you be forced to admit that there was not, in fact, any information that makes up a “mind” outside of the mechanistic brain? If it turns out that humans and their simulations both behave and think in exactly the same fashion?
Again, it’s not fictional evidence, it’s me asking what your true rejection would need to be for you to accept that the universe is turtles all the way down.
it’s not fictional evidence, it’s me asking what your true rejection would need to be for you to accept that the universe is turtles all the way down.
If it’s a question of why I believe what I do, the starting point is described here, in the section on being wrong about one’s phenomenology. Telling me that colors aren’t there at any level is telling me that my color phenomenology doesn’t exist. That’s like telling you, not just that you’re not having a conversation on lesswrong, but that you are not even hallucinating the occurrence of such a conversation. There are hard limits to the sort of doubt one can credibly engage in about what is happening to oneself at the level of appearance, and the abolition of color lies way beyond those limits, out in the land of “what if 2+2 actually equals 5?”
The next step is the insistence that such colors are not contained in physical ontology, and so a standard materialism is really a dualism, which will associate colors (and other ingredients of experience) with some material entity or property, but which cannot legitimately identify them with it. I think that ultimately this is straightforward—the mathematical ontologies of standard physics are completely explicit, it’s obvious what they’re made of, and you just won’t get color out of something like a big logical conjunction of positional properties—but the arguments are intricate because every conceivable attempt to avoid that conclusion is deployed. So if you want arguments for this step, I’m sure you can find them in Chalmers and other philosophers.
Then there’s my personal alternative to dualism. The existence of an alternative, as a palpable possibility if not a demonstrated reality, certainly helps me in my stubbornness. Otherwise I would just be left insisting that phenomenal ontology is definitely different to physical ontology, and historically that usually leads to advocacy of dualism, though starting in the late 19th century you had people talking about a monistic alternative—“panpsychism”, later Russell’s “neutral monism”. There’s surely an important issue buried here, something about the capacity of people to see that something is true, though it runs against their other beliefs, in the absence of an alternative set of beliefs that would explain the problematic truth. It’s definitely easier to insist on the reality of an inconvenient phenomenon when you have a candidate explanation; but one would think that, ideally, this shouldn’t be necessary.
your writing a “counter-fiction” would be a very useful exercise and might clarify to me how you can continue to hold the position that you do. I honestly do not fathom it.
It shouldn’t require a story to convey the idea. Or rather, a story would not be the best vehicle, because it’s actually a mathematical idea. You probably know that when we look at the world, we see individuated particles, but at the level of quantum wavefunctions we have not just a wavefunction per particle but entangled wavefunctions for several particles at once, which can’t be factorized into a “product” of single-particle wavefunctions (instead, such entangled wavefunctions are sums, superpositions, of distinct product wavefunctions). One aspect of the dispute about quantum reality is whether the supreme entangled wavefunction of the universe is the reality, whether it’s just the particles, or whether it’s some combination. But we could also speculate that the reality is something in between—that reality consists of lots of single particles, and then occasional complex entities which we would currently call entangled sets of particles. You could write down an exact specification of such an ontology; it would be a bit like what they call an objective-collapse ontology, except that the “collapses” or “quantum jumps” are potentially between entangled multiparticle states, not just localized single-particle states.
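To make the factorization claim concrete, here is a minimal two-qubit sketch in numpy (the helper name schmidt_rank is just an illustrative label): a product state has Schmidt rank 1, while a Bell state, being a sum of two distinct product wavefunctions, has Schmidt rank 2 and so cannot be written as one wavefunction per particle.

```python
import numpy as np

def schmidt_rank(state, dim_a=2, dim_b=2, tol=1e-12):
    """Count the nonzero Schmidt coefficients of a bipartite pure state."""
    matrix = np.asarray(state).reshape(dim_a, dim_b)        # amplitudes as a dim_a x dim_b matrix
    singular_values = np.linalg.svd(matrix, compute_uv=False)
    return int(np.sum(singular_values > tol))

up = np.array([1.0, 0.0])    # |0>
down = np.array([0.0, 1.0])  # |1>

# A product state: one wavefunction per particle, joined by a tensor product.
product = np.kron(up, down)                                  # |0>|1>
print("product state Schmidt rank:", schmidt_rank(product))  # -> 1

# A Bell state: a superposition of two distinct product states.
bell = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
print("Bell state Schmidt rank:", schmidt_rank(bell))        # -> 2
```

The rank is read off from a singular value decomposition of the reshaped amplitude vector; any rank above 1 means the two-particle wavefunction cannot be factorized.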
My concept is that the self is a single humongous “multi-particle state” somewhere in the brain, and the “lifeworld” (mentioned at the start of this post) is wholly contained within that state. This way we avoid the dualistic association between conscious state and computational state, in favor of an exact identity between conscious state and physical state. The isomorphism does not involve, on the physical side, a coarse-grained state machine, so here it can be an identity. When I’m not just defending the reality of color (etc) and the impossibility of identifying it with functional states, this is the model that I’m elaborating.
So if you want a counter-fiction, it would be one in which the brain is a quantum computer and consciousness is the big tensor factor in its quantum state, and in which classical uploading destroys consciousness because it merely simulates the big tensor factor’s dynamics in an entity which ontologically consists of a zillion of the simple tensor factors (a classical computer). In other words, whether the state machine is realized within a single tensor factor or in a distributed causal network of them is what determines whether it is conscious or not.
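As a rough schematic of that distinction, under the stated hypothesis (the symbols Ψ_brain, Φ_self and Ψ_classical are labels introduced here purely for illustration): the brain’s state would contain one large tensor factor that does not factorize further, whereas the classical computer simulating that factor’s dynamics is, ontologically, close to a product of a vast number of simple factors.

```latex
% Hypothesized quantum brain: one big, non-factorizable "self" factor
% tensored with everything else.
\[
  \lvert \Psi_{\mathrm{brain}} \rangle
    = \lvert \Phi_{\mathrm{self}} \rangle \otimes \lvert \mathrm{env} \rangle ,
  \qquad
  \lvert \Phi_{\mathrm{self}} \rangle
    \neq \lvert \phi_1 \rangle \otimes \lvert \phi_2 \rangle \otimes \cdots
\]

% Classical computer running the upload: approximately a product of a vast
% number of simple factors (individual switching elements), even though their
% joint dynamics mimics the dynamics of the self factor.
\[
  \lvert \Psi_{\mathrm{classical}} \rangle
    \approx \bigotimes_{i=1}^{N} \lvert \phi_i \rangle ,
  \qquad N \gg 1
\]
```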
If you found yourself in the position described in the story, would you be forced to admit that there was not, in fact, any information that makes up a “mind” outside of the mechanistic brain?
I was asked this some time ago—if I found myself to be an upload, as in your scenario, how would that affect my beliefs? To the extent that I believed what was going on, I would have to start considering Chalmers-style dualism.
So if you want a counter-fiction, it would be one in which the brain is a quantum computer and consciousness is the big tensor factor in its quantum state, and in which classical uploading destroys consciousness because it merely simulates the big tensor factor’s dynamics in an entity which ontologically consists of a zillion of the simple tensor factors
I’m more interested in the part in the fiction where the heroes realize that the people they’ve lived with their whole lives in their revealed-to-be-dystopian future, who’ve had an upload brain prosthesis after some traumatic injury or disease, are actually p-zombies. How do they find this out, exactly? And how do they deal with there being all these people, who might be their friends, parents or children, who are on all social and cognitive accounts exactly like them, who they are now convinced lack a subjective experience?
Let’s see what we need to assume for such a fictional scenario. First, we assume (1) that functionally successful brain emulation exists, at a level where the emulation includes memory and personality. Then I see a choice between (2a) the world is still run by human beings, and (2b) the world has powerful AI. Finally, we have a choice between (3a) there has been no discovery of a need for quantum neuroscience yet, and (3b) a quantum neuroscience exists, but a quantum implementation of the personal state machine is not thought necessary to preserve consciousness.
In my opinion, (1) is in tension with (3a) and even with (2a). Given that we are assuming some form of quantum-mind theory to be correct, it seems unlikely that you could have functionally adequate uploads of whole human beings without this having already been discovered. And having the hardware and the models and the brain data needed to run a whole human sim should imply that you are well past the threshold of being able to create AI that is nonhuman but with human intellectual potential.
So by my standards, the best chance to make the story work is the combination of (1) with (3b), and possibly with (2b) also. The (2b) scenario might be set after a “semi-friendly” singularity, in which a Culture-like existence for humanity (as in Iain M. Banks) has been created, and the science and technology for brain prostheses have been developed by AIs. Since the existence of a world-ruling friendly super-AI (a “Sysop”) raises so many other issues, it might be better to think in terms of an “Aristoi”-like world where there’s a benevolent human ruling class who have used powerful narrow AI to produce brain emulation technology and other boons to humanity, and who keep very tight control on its spread. The model here might be Vinge’s Peace Authority, a dictatorship under which the masses have a medieval existence and the rulers have the advanced technology, which they monopolize for the sake of human survival.
However it works, I think we have to suppose a technocratic elite who somehow know enough to produce working brain prostheses, but not enough to realize the full truth about consciousness. They should be heavily reliant on AI to do the R&D for them, but they’ve also managed to keep the genie of transhuman AI trapped in its box so far. I still have trouble seeing this as a stable situation—e.g. a society that lasts for several generations, long enough for a significant subpopulation to consist of “ems”. It might help if we are only dealing with a small population, either because most of humanity is dead or most of them are long-term excluded from the society of uploads.
And even after all this world-building effort, I still have trouble just accepting the scenario. Whole brain emulation good enough to provide a functionally viable copy of the original person implies enormously destabilizing computational and neuroscientific advances. It’s also not something that is achieved in a single leap; to get there, you would have to traverse a whole “uncanny valley” of bad and failed emulations.
Long before you faced the issue of whether a given implementation of a perfect emulation produced consciousness, you would have to deal with subcultures who believed that highly imperfect emulations are good enough. Consider all the forms of wishful thinking that afflict parents regarding their children, and people who are dying regarding their prospects of a cure, and on and on; and imagine how those tendencies would interact with a world in which a dozen forms of brain-simulation snake-oil are on the market.
Look at the sort of artificial systems which are already regarded by some people as close to human. We already have people marrying video game characters, and aiming for immortality via “lifebox”. To the extent that society wants the new possibilities that copies and backups are supposed to provide, it will not wait around while technicians try to chase down the remaining bugs in the emulation process. And what if some of your sims, or the users of brain prostheses, decide that what the technicians call bugs are actually features?
So this issue—autonomous personlike entities in society, which may or may not have subjective experience—is going to be upon us before we have ems to worry about. A child with a toy or an imaginary friend may speak very earnestly about what its companion is thinking or feeling. Strongly religious people may also have an intense imaginative involvement, a personal relationship, with God, angels, spirits. These animistic, anthropomorphizing tendencies are immediately at work whenever there is another step forward in the simulation of humanity.
At the same time, contemporary humans now spend so much time interacting via computer that they have begun to internalize many of the concepts and properties of software and computer networks. It therefore becomes increasingly easy to create a nonhuman intelligent agent which passes for an Internet-using human. A similar consideration will apply to neurological prostheses: before we have cortical prostheses based on a backup of the old natural brain, we will have cortical prostheses which are meant to be augmentations, and so the criterion of whether even a purely restorative cortical prosthesis is adequate will increasingly be based on the cultural habits and practices of people who were using cortical prostheses for augmentation.
There’s quite a bit of “least convenient possible world” intent in the thought experiment. Yes, I’m assuming that things are run by humans, and that transhuman or nonhuman AIs are either successfully not pursued or not achievable with anything close to the effort required for ems, and are therefore still in the future. I’m assuming that the ems are made using advanced brain scanning and extensive brute-force reverse engineering with narrow AIs, with people in charge who do not actually understand the brain well enough to build one from scratch themselves. And I’m assuming that strong social taboos of the least convenient possible world prevent running ems in anything but biological or extremely lifelike artificial bodies, at the same subjective speed as biological humans, with no rampant copying of them that would destabilize society and lead to a whole new set of thought-experimental problems.
The wishful-thinking, not-really-there ems are a good point, but again, the least convenient possible world would probably be really fixated on the idea that the brain is the self, so its culture just rejects stuff like lifeboxes as cute art projects and goes straight for whole brain emulation, with any helpful attempts to fudge broken output with a pattern-matching chatbot AI being actively looked out for, quickly discovered, and leading to swift disgrace for the shortcut-using researchers. It is possible that things would still lead to some kind of wishful-thinking outcome, but getting to the stage where the brain emulation is producing actions recognizable as resulting from the personality and memories of the person it was taken from, without using any cheating obvious to the researcher (such as a lifebox pattern matcher), sounds like it should be pretty far along toward being the real thing, given that the in-brain encoding of the personality and memories would be pretty much a complete black box.
There’s still a whole load of unknown unknowns and ways things could go wrong at this point, but it looks a lot more like the “NOW what do we do?” situation I was after in the grandparent post than the admittedly likely-in-our-world scenario of people treating lifebox chatbots as their dead friends.
One possible counter-fiction would have an ending similar to the bad ending of Three Worlds Collide.