I’m very curious as to your theory of what happens if you do both. That is, suppose you’re cryogenically frozen and then revived, while someone also makes a top-notch copy of you based on the recorded memories you left behind. It seems rather obvious that you can’t have double-you, so what happens?
This hypothetical suggests to me that one or both are doomed - and if it’s just one, I’d think it’s this method you’ve suggested that wouldn’t work. But I really haven’t thought too hard on this issue, so I’m curious as to what others think the solution/outcome is.
Why not?

There’s no mechanism linking the two entities, so it seems necessary that each entity has a distinct first-person experience. Whoever “you” are, then, you can’t experience being both entities. I think that’s the cleanest way to express what I mean, and thank you for calling me on using “obvious.”
Another way of thinking about this: Suppose someone offers to make 1 million essentially perfect copies of you and subject them to the best life they can engineer for them, which you get to confirm fits perfectly with your own values. The catch: prior to the copying, they’ll paint a 1 on your forehead, which will not be copied. They’ll then find “you” and subject the “original” to endless torture. I, for one, would not hesitate to reject this offer for largely self-interested reasons. I can understand an altruist taking it, though that makes the fact that the million people are copies rather irrelevant. If I understand the stance of many people here (RH, for example), they’d take the deal out of self-interest (at least for some number of copies, which could be greater than a million), because they don’t distinguish between copies. This seems like severely flawed reasoning, though too complex to properly address in a sub-sub-comment. I’d like to know if this is a straw man.
In general, I find that continuity of consciousness is an illusion that’s hard-wired into us for self-preservational purposes. We can explain the mind without needing to define some sort of entity that remains the same from our birth to death, and any attempted definition for such an entity gets more and more convoluted as you try to consistently answer questions like “if you lose all your memories, is it still you”, “if you get disassembled and then rebuilt, is that still you” and “how can you at 5 be the same person as you at 50”. It’s a bit like believing in a soul.
Still, the concept of a ‘you’ has various uses, e.g. legal and social ones, and it’s still a relatively well-defined concept as long as you don’t try to consider various weird cases. Once we have a world where people can be copied, however, the folk-psychological concept of “you” pretty much becomes incoherent and arbitrary. Which still doesn’t force you to completely abandon the concept, of course—you can define it however you wish.
As for your thought experiment, there are at least two interpretations that make sense to me. One is that since every copy will have experiences and memories identical to being me and there are a million of them, then there’s a 1⁄1,000,000 chance for me to “become” any particular one of the copies. Correspondingly, there’s a 1⁄1,000,000 chance that I’ll be tortured. The other interpretation is that there is a 100% chance that I will “become” each of the copies, so a 100% chance that I’ll become the one that is eternally tortured and a 100% chance that I’ll also become the 999,999 others.
Alternatively, you could also say that there’s a 100% chance that I’ll remain the one who had “1” painted on his forehead. Or that I’ll become all of the copies whose number happens to be a prime. Or whatever. Identity is arbitrary in such a scenario, so which one is “correct” depends pretty much only on your taste.
Your question is a more complicated version of “what happens if I’m non-destructively copied”, and the answer to that one is that both of them are you, and so before the copying is done you should assign equal probability to “ending up as” the original or as the copy. (It should work the same as Everett branching.)
In this case, I don’t fully expect the “reconstructed from writings” self to be as connected to my current subjective experience as a cryopreserved self would be. But the mere fact of there being “two selves” doesn’t present an inherent problem.

It’s not a given that building this kind of probabilistic model is helpful. (Forgetful driver and Sleeping Beauty again.)
If I understand the physics and the link even a little bit correctly, those copies would have to be identical to an arbitrarily high degree of specification. That identicalness would end soon (I’d imagine something like nanoseconds) after the new brain was generated (and I think it’s extremely charitable to posit that such a replication is meaningfully possible); it seems like even variations in local gravity would break the identity. Certainly, within a few seconds, processing necessarily different sensory data (as both copies can’t be observing from the exact same location) would make the two different. What happens to double-me at that point, or is that somehow not material?
Well, ISTM that only the gross structure (the cells, the strength of their connections, and the state of firing) is really essential to the relevant pattern. Advanced nanotechnology is theoretically more than capable of recording such data and constructing a copy, to within the accuracy of body-temperature thermal noise. (So if you really wanted to be careful, you’d put the brain in suspended animation at low temperature, copy it there, and warm both copies back up to normal; but I don’t think that would be necessary in practice.)
What happens to double-me at that point, or is that somehow not material?
Yup, the copies diverge. Just as there are different quantum versions of me branching as life goes along (see here for a relevant parable), my experience would branch there, with two people who once were “me”. When I observe a quantum random coinflip, half of future mes are in worlds where they observe heads and half are in worlds where they observe tails; they quickly become different people from each other, both of them remembering having been me-before-the-flip, and so it’s quite coherent for me to say before the flip that I expect to see heads with 1⁄2 probability and tails with 1⁄2 probability. The duplication experiment is no different, except that this time my branched copies have the chance to play chess against each other afterwards. I expect 1⁄2 probability of finding myself to be the one who remained in the scanning room (and who gets to play White), and 1⁄2 chance of finding myself to be the one who wakes in the construction room (and who gets to play Black).
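Here’s a minimal sketch, in Python and purely for illustration, of the successor-counting rule I’m appealing to; the branch labels, the equal weights, and the anticipation helper are assumptions of the toy model, not anything derived from the physics:

```python
# Toy model of "anticipation by counting successors" -- purely illustrative.
# A present observer-moment maps to a list of (successor_label, weight) pairs;
# the chance of "finding yourself" as a given successor is its weight divided
# by the total weight of all successors. Labels and weights are assumptions.

from fractions import Fraction

def anticipation(successors):
    """Normalize successor weights into anticipation probabilities."""
    total = sum(weight for _, weight in successors)
    return {label: Fraction(weight, total) for label, weight in successors}

# Quantum coinflip: I branch into a heads-seeing and a tails-seeing successor.
coinflip = [("sees heads", 1), ("sees tails", 1)]

# Non-destructive duplication: one successor stays in the scanning room,
# one wakes in the construction room; both remember being me-before-the-scan.
duplication = [("stays in scanning room", 1), ("wakes in construction room", 1)]

print(anticipation(coinflip))
print(anticipation(duplication))
# Both print two successors at Fraction(1, 2) each: the same 50/50 anticipation.
```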
This is somewhat redundant with my previous response, but suppose we have some superficial way to distinguish the two—e.g. you’re marked with something that doesn’t get copied. Why would you not expect to continue to have the experience associated with the physical object that is your brain, i.e. not wake up as the copy?
It’s also interesting that this assumes it’s meaningfully possible to replicate a brain, which is an unanswered empirical question. Even granted that the world is perfectly materialistic, it does not seem to follow that one can make a copy of a brain so perfect that one’s experience could jump from one to the other, so to speak. Sort of like Heisenberg’s uncertainty principle, but for brain replication.
...unless you’re referring to the situation where you wake up after an individual has been copied. In that case, it does seem like the odds you’re the original are 50⁄50. But if you’re the original going to the copying-lab, it seems like you should be virtually guaranteed to wake up in your own body, which will be confirmable if you give it some identifying mark beforehand (or ensure that it’s directed to a red room and the copy to a blue one, or whatever).
OK, so we do disagree on this fundamental level. I apologize for the following infodump, especially when it’s late at night for me...
I assign high probability to the patternist theory of consciousness: the thesis that the subjective thread of consciousness is not maintained by material continuity or a metaphysical soul, but by the patterned relations between the different complicated brain-states (or mind-moments, if we want to be less brain-chauvinistic). That is, you can identify the configuration that is my-brain-right-now (A1), and the configuration that is my-brain-a-millisecond-later (A2), and they’re connected in a way similar to the way that successive states of Conway’s Game of Life are connected. (Of course, there are multiple ways A1 could go, so throw in A2′, A2″, etc, but only a small subset of possible brain-configurations have a nonnegligible connection of this sort to A1.) Anyway, my estimate for “what I’ll experience next after A1” should just be a matter of counting all the A2s and variants in the multiverse, and comparing the measures of each.
This sounds weird to our evolved intuitions, but it appears to be the simplest theory of subjective experience which doesn’t involve extra metaphysical entities or new, heretofore unobserved, laws of physics. As noted in the link above, the notion of “material continuity” is a practical aggregate consequence which doesn’t cut to the way the universe actually works. Reality is made of configurations, not objects, and it would be unnatural to introduce a basic property for a substructure of a configuration (like A2) which wouldn’t hold for an identical substructure placed elsewhere in space and time. (Trivial properties like “location” obviously excepted, and completely historical-social properties like “the first nanotube of this length ever constructed” should be in a different category as well.)
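If it helps, here’s a small sketch (Python, purely illustrative; the glider pattern and the offset are arbitrary choices of mine) of the Game-of-Life point: the successor of a configuration is computed from the pattern alone, so an identical substructure embedded elsewhere in space has an identical successor, location aside.

```python
# Illustrative sketch: in Conway's Game of Life, the successor of a configuration
# is a pure function of the pattern itself, not of where the pattern happens to sit.
# The glider and the (100, 100) offset below are arbitrary choices.

from itertools import product

def step(live):
    """One Game of Life step on a set of live (x, y) cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    # A cell is live next step with exactly 3 live neighbours,
    # or with 2 live neighbours if it is already live.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

def translate(pattern, ox, oy):
    """The same pattern, placed somewhere else."""
    return {(x + ox, y + oy) for (x, y) in pattern}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

# Stepping the pattern here and stepping an identical pattern placed elsewhere
# give the same successor, up to the trivial property of location.
assert step(translate(glider, 100, 100)) == translate(step(glider), 100, 100)
print("identical configurations have identical successors, wherever they sit")
```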
The patternist theory of consciousness, incidentally, is basically assumed in the OP and in a good deal of the LW discussion of uploading and other such technologies.
I follow this general theory and mostly agree with it, though I admit it isn’t fully adapted into my thoughts on consciousness generally.
What I don’t see, exactly, is how “good enough” copies could work. (I also don’t see how identical copies could work, but that’s a practical issue, not a conceptual one.) Recreating someone who’s significantly more like me than most seems rather categorically different from freezing and later reactivating my brain, particularly since people who are significantly more like me than most probably already exist to some degree. At what degree does similarity cross some relevance threshold, if ever? Or have I misconstrued the issue?
At what degree does similarity cross some relevance threshold, if ever?
That’s precisely the issue at the heart of the current discussion, as I see it. And it’s on that issue that I’m uncertain. A copy of the cellular structure and activity of my brain is definitely good enough to carry on my conscious experience. Is a best-guess reconstruction of that structure from my written records good enough? I strongly suspect not, but it’s always dicey to say what a superintelligence couldn’t figure out from limited evidence.