Your question is a more complicated version of “what happens if I’m non-destructively copied?”, and the answer to that one is that both the original and the copy are you, so before the copying is done you should assign equal probability to “ending up as” either one. (It should work the same as Everett branching.)
In this case, I don’t fully expect the “reconstructed from writings” self to be as connected to my current subjective experience as a cryopreserved self would be. But the mere fact of there being “two selves” doesn’t present an inherent problem.
It’s not a given that building this kind of probabilistic model is helpful. (The forgetful-driver and Sleeping Beauty problems again.)
If I understand the physics and the link even a little bit correctly, those copies would have to be identical to an arbitrarily high degree of specification. That identity would end soon after the new brain was generated (within something like nanoseconds, I’d imagine), and I think it’s extremely charitable to posit that such a replication is meaningfully possible; it seems like even variations in local gravity would break the identity. Certainly, within a few seconds, processing necessarily different sensory data (since both copies can’t be observing from the exact same location) would make the two different. What happens to double-me at that point, or is that somehow not material?
Well, ISTM that only the gross structure (the cells, the strength of their connections, and the state of firing) is really essential to the relevant pattern. Advanced nanotechnology is theoretically more than capable of recording such data and constructing a copy, to within the accuracy of body-temperature thermal noise. (So if you really wanted to be careful, you’d put the brain in suspended animation at low temperature, copy it there, and warm both copies back up to normal; but I don’t think that would be necessary in practice.)
Yup, the copies diverge at that point. Just as there are different quantum versions of me branching as life goes along (see here for a relevant parable), my experience would branch there, with two people who once were “me”. When I observe a quantum random coinflip, half of future mes are in worlds where they observe heads and half are in worlds where they observe tails; they quickly become different people from each other, both of them remembering having been me-before-the-flip, and so it’s quite coherent for me to say before the flip that I expect to see heads with 1⁄2 probability and tails with 1⁄2 probability. The duplication experiment is no different, except that this time my branched copies have the chance to play chess against each other afterwards. I expect 1⁄2 probability of finding myself to be the one who remained in the scanning room (and who gets to play White), and 1⁄2 probability of finding myself to be the one who wakes in the construction room (and who gets to play Black).
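To make the parallel explicit, here’s a toy bit of bookkeeping (purely illustrative; the branch labels and the equal weights are my own assumptions, not anything derived from the physics). The point is just that the coinflip and the duplication get identical treatment:

```python
# Toy model: subjective anticipation as branch-counting.
# Enumerate the branches, then normalize their weights into probabilities.

def anticipation(branches):
    """Map each branch to the probability of finding myself in it."""
    total = sum(branches.values())
    return {outcome: weight / total for outcome, weight in branches.items()}

coinflip = {"observe heads": 1, "observe tails": 1}
duplication = {
    "remain in scanning room (play White)": 1,
    "wake in construction room (play Black)": 1,
}

print(anticipation(coinflip))     # both outcomes 0.5
print(anticipation(duplication))  # both outcomes 0.5, same structure
```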
This is somewhat redundant with my previous response, but suppose we have some superficial way to distinguish the two (e.g. you’re marked with something that doesn’t get copied). Why would you not expect to continue to have the experience associated with the physical object that is your brain, i.e. not wake up as the copy?
It’s also interesting that this assumes it’s meaningfully possible to replicate a brain, which is an unanswered empirical question. Even granted that the world is perfectly materialistic, it does not seem to follow that one can make a copy of a brain so perfect that one’s experience could jump from one to the other, so to speak. Sort of like Heisenberg’s uncertainty principle, but for brain replication.
...unless you’re referring to the situation where you wake up after an individual has been copied. In that case, it does seem like the odds you’re the original are 50⁄50. But if you’re the original going to the copying-lab, it seems like you should be virtually guaranteed to wake up in your own body, which will be confirmable if you give it some identifying mark beforehand (or ensure that it’s directed to a red room and the copy to a blue one, or whatever).
OK, so we do disagree on this fundamental level. I apologize for the following infodump, especially when it’s late at night for me...
I assign high probability to the patternist theory of consciousness: the thesis that the subjective thread of consciousness is not maintained by material continuity or a metaphysical soul, but by the patterned relations between the different complicated brain-states (or mind-moments, if we want to be less brain-chauvinistic). That is, you can identify the configuration that is my-brain-right-now (A1), and the configuration that is my-brain-a-millisecond-later (A2), and they’re connected in a way similar to the way that successive states of Conway’s Game of Life are connected. (Of course, there are multiple ways A1 could go, so throw in A2′, A2″, etc., but only a small subset of possible brain-configurations has a nonnegligible connection of this sort to A1.) Anyway, my estimate for “what I’ll experience next after A1” should just be a matter of counting all the A2s and variants in the multiverse, and comparing the measures of each.
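Put slightly more formally (my own shorthand for the counting just described, not notation from anywhere in this thread; μ stands for the measure of a configuration):

$$P(\text{next experience} = A_{2,i} \mid A_1) = \frac{\mu(A_{2,i})}{\sum_j \mu(A_{2,j})}$$

i.e. each candidate successor of A1 gets weight proportional to its share of the total measure flowing out of A1.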
This sounds weird to our evolved intuitions, but it appears to be the simplest theory of subjective experience which doesn’t involve extra metaphysical entities or new, heretofore unobserved, laws of physics. As noted in the link above, the notion of “material continuity” is a practical aggregate consequence which doesn’t cut to the way the universe actually works. Reality is made of configurations, not objects, and it would be unnatural to introduce a basic property for a substructure of a configuration (like A2) which wouldn’t hold for an identical substructure placed elsewhere in space and time. (Trivial properties like “location” obviously excepted, and completely historical-social properties like “the first nanotube of this length ever constructed” should be in a different category as well.)
The patternist theory of consciousness, incidentally, is basically assumed in the OP and in a good deal of the LW discussion of uploading and other such technologies.
I follow this general theory and mostly agree with it, though I admit it isn’t fully integrated into my thinking about consciousness more generally.
What I don’t see, exactly, is how “good enough” copies could work. (I also don’t see how identical copies could work, but that’s a practical issue, not a conceptual one.) Recreating someone who’s significantly more like me than most seems categorically different from freezing and later reactivating my brain, particularly since such people probably already exist to some degree. At what degree does similarity cross some relevance threshold, if ever? Or have I misconstrued the issue?
That’s precisely the issue at the heart of the current discussion, as I see it. And it’s on that issue that I’m uncertain. A copy of the cellular structure and activity of my brain is definitely good enough to carry on my conscious experience. Is a best-guess reconstruction of that structure from my written records good enough? I strongly suspect not, but it’s always dicey to say what a superintelligence couldn’t figure out from limited evidence.