Is it possible even in principle to perform a “consciousness transfer” from one human body to another? On the same principle as mind uploading, only the mind ends up in another biological body rather than a computer. Can you transfer “software” from one brain to another in a purely informational way, while preserving the anatomical integrity of the second organism? If so, would the recipient organism come from a fully alive and functional human who would be basically killed for this purpose? Or bred for this purpose? Or would it require a complete brain transplant? (If so, how would neural structures found in the second body heal & connect with the transplanted brain so that a functional central nervous system results?) Wouldn’t the person whose consciousness is being transferred experience some sort of personality change due to “inhabiting” a structurally different brain or body?
Is this whole hypothesis just an artifact of reminiscent introjected mind-body dualism, not compatible with modern science? Does the science world even know enough about consciousness and the brain to be able to answer this question?
I’m asking this because ever since I found out about ems and mind uploading, having minds moved to bodies rather than computers seemed to me a more appealing hypothetical solution to the problem of death/mortality. Unfortunately, I lack the necessary background knowledge to think coherently about this idea, so I figured there are many people on LW who don’t, and could explain to me whether this whole idea makes sense.
I don’t think anybody has hard evidence of answers to any of those questions yet (though I’d be fascinated to learn otherwise) but I can offer some conjectures:
Possible in principle?
Yes. I see no evidence that sentience and identity are anything other than information stored in the nervous system, and in principle the cognitive portion of a nervous system is an organ and could be transplanted like any other.
Preserving anatomical integrity?
Not with anything like current science. We can take non-invasive brain scans, but they’re pretty low-resolution and (so far as I know) strictly read-only. Simply stimulating parts of the brain is nowhere near enough to re-write it in such a way that it becomes another person’s brain.
Need to kill donors?
To the best of my knowledge, it’s theoretically possible to basically mature a human body including a potentially-functional brain, while keeping that brain in a vegetative state the entire time. Of course, that’s still a potential human—the vegetativeness needs to be reversible for this to be useful—so the ethics are still highly questionable. It’s probably possible to do it without a full brain at all, which seems less evil if you can somehow do it by some mechanism other than what amounts to a pre-natal full lobotomy, but that would require the physical brain transplant option for transference.
Nerves connecting and healing?
Nerves can repair themselves, though it’s usually extremely slow. Stem cell therapies have potential here, though. Connecting the brain to the rest of the body involves a lot of nerves, but they’re pretty much all sensory and motor nerves so far as I know; the brain itself is fairly self-contained.
Personality change?
That depends on how different the new body is from the old, I would guess. The obviously-preferable body is a clone, for many reasons, including avoiding immune-system rejection of the new brain. Personality is always going to be somewhat externally-driven, so I wouldn’t expect somebody transferred from a 90-year-old body to a 20-year-old one to have the same personality regardless of any other information, because the body will just be younger. On the other hand, if you use a clone body that’s the same age as the transferee, it wouldn’t shock me if the personality didn’t actually change significantly; it should basically feel like going under for surgery and then coming out again with nothing changed.
Now, mind you, I’m no brain surgeon (or medical professional of any sort), nor have I studied any significant amount of psychology. Nor am I a philosopher (see my question above). However, I don’t really see how the mind could be anything except a characteristic of the body. Altering (intentionally or otherwise) the part of the body responsible for thought alters the mind. Our current attempted maps of the mind don’t come close to fully representing the territory, but I firmly believe it is mappable. Whether an existing one is re-mappable I can’t say, but the idea of transplanting a brain has been explored in science fiction for decades, and in theory I see no logical reason why it couldn’t work.
To the best of my knowledge, it’s theoretically possible to basically mature a human body including a potentially-functional brain, while keeping that brain in a vegetative state the entire time.
I don’t think this is currently possible. The body just wouldn’t work. A large part of the ‘wiring’ during infancy and childhood is connecting body parts and functions with higher and higher level concepts. Think about toilet training. You aren’t even aware of how it works, but it nonetheless somehow connects large-scale planning (how urgent is it, when and where are toilets) to the actual control of the organs. Considering how different minds (including the connection to the body) are, I think the minimum requirement (short of singularity-level interventions) is an identical twin.
That said, I think the existing techniques for transferring motion from one brain to another, combined with advanced hypnosis and drugs, could conceivably be developed to a point where it is possible to transfer noticeable parts of your identity over to another body—at least over an extended period during which the new brain ‘learns’ to be you. Transferring memory as well would be comparably easy. Whether the result can be called ‘you’, or is sufficiently alike to you, is another question.
Need to kill donors? To the best of my knowledge, it’s theoretically possible to basically mature a human body including a potentially-functional brain, while keeping that brain in a vegetative state the entire time. Of course, that’s still a potential human—the vegetativeness needs to be reversible for this to be useful—so the ethics are still highly questionable.
That’s how I pictured it, yes. At this point I wouldn’t concern myself with the ethics of it, because, if our technology advances this much, then simply the fact that humanity can perform such a feat is an extremely positive thing, and probably the end of death as we know it. What worries me more is that this wouldn’t result in a functional mature individual. For instance: in order to develop the muscular system, the body’s skeletal muscles would have to experience some sort of stress, i.e. be used. If you grow the organism in a jar from birth to consciousness transfer (as is probably most ethical), it wouldn’t have moved at all its entire life up to that point, and would therefore have extremely weak musculature. What to do in the meantime, electrically stimulate the muscles? Maybe, but it probably wouldn’t have results comparable to natural usage. Besides, there are probably many other body subsystems that would suffer similarly without much you could do about it. See Gunnar Zarncke’s comment below.
On the other hand, if you use a clone body that’s the same age as the transferee, it wouldn’t shock me if the personality didn’t actually change significantly; it should basically feel like going under for surgery and then coming out again with nothing changed.
Yes, but I imagine most uses to be related to rejuvenation. It would mean that the genetic info required for cloning would have to be gathered basically at birth (and the cloning process begun shortly thereafter), and there would still be a 9-month age difference. There’s little point in growing a backup clone for an organism so soon after birth. An age difference of 20 years between person and clone seems more reasonable.
In order to provide a definite answer to this question, we’d need to know how the brain produces consciousness and personality, as well as the exact mechanism of the upload (e.g., can it rewire synapses?).
Not exactly true; we probably don’t need to know how consciousness arises. We would certainly have to rewire synapses to match the original brain, and it is likely that if we exactly replicate brain structure neuron by neuron, synapse by synapse, we would still not know where consciousness lies, but would have a conscious duplicate of the original.
Alternatively you could hypothesize a soul, but that seems like worry for worry’s sake.
The flip side to this is that there is no measurable difference between ‘someone who is you and feels conscious’ and ‘someone who is exactly like you in every way but does not feel conscious (but will continue to claim that e does)’. Even if you identified a mental state on a brain scan that you felt certain was causing the experience of consciousness, in order to approximate a proof of this you would have to be able to measure a group of subjects that are nearly identical except not experiencing consciousness, a group that has not yet been found in nature.
Can you transfer “software” from one brain to another in a purely informational way, while preserving the anatomical integrity of the second organism?
This can already be done via the senses. This also transfers consciousness of the content that is being transferred. What would consciousness without content look like?
There’s no such thing as “purely informational” when it comes to brains.
I’m asking this because ever since I found out about ems and mind uploading, having minds moved to bodies rather than computers seemed to me a more appealing hypothetical solution to the problem of death/mortality.
If you want to focus on that problem it’s likely easier to simply fix up whatever is wrong in the body you are starting with than doing complex uploading.
There’s no such thing as “purely informational” when it comes to brains.
It’s good to know, but can you elaborate more on this in the context of the grandparent comment? Perhaps with an analogy to computers.
If you want to focus on that problem it’s likely easier to simply fix up whatever is wrong in the body you are starting with than doing complex uploading.
It occurred to me too, but I’m not sure this is the definitive conclusion. Fully healing an aging organism suffering from at least one severe disease, while closer to current medical technology, wouldn’t leave the patient in as good a state as simply moving to a 20-year-old body.
It’s good to know, but can you elaborate more on this in the context of the grandparent comment? Perhaps with an analogy to computers.
Brains are no computers.
Fully healing an aging organism suffering from at least one severe disease, while closer to current medical technology, wouldn’t leave the patient in as good a state as simply moving to a 20-year-old body.
Of course you wouldn’t only heal one severe disease. You would also lengthen telomeres and do all sorts of other things that reduce aging effects.
Suppose all the memories in one person were wiped and replaced with your memories. I believe the new body would claim to be you. It would introspect as you might now, and find your memories as its own, and say “I am Dahlen in a new body.”
But would it be you? If the copying had been non-destructive, then Dahlen in the old body still exists and would “know” on meeting Dahlen in the new body that Dahlen in the new body was really someone else who just got all Dahlen’s memories up to that point.
Meanwhile, Dahlen in the new body would have capabilities, moods, reactions, which would depend on the substrate more than the memories. The functional parts of the brain, the wiring-other-than-memories as it were, would be different in the new body. Dahlen in the new body would probably behave in ways that were similar to how the old body with its old memories behaved. It would still think it was Dahlen, but as Dahlen in the old body might think, that would just be its opinion and obviously it is mistaken.
As to uploading, it is more than the brain that needs to be emulated. We have hormonal systems that mediate fear and joy and probably a broad range of other feelings. I have a sense of my body that I am in some sense constantly aware of which would have to be simulated and would probably be different in an em of me than it is in me, just as it would be different if my memories were put in another body.
Would anybody other than Dahlen in the old body have a reason to think that Dahlen in the new body wasn’t really Dahlen? I don’t think so, and Dahlen in the new body especially would probably be pretty sure it was Dahlen, even if it claimed to rationally understand how it might not be. It would know it was somebody, and wouldn’t be able to come up with any other compelling idea for who it was other than Dahlen.
I understand all this. And it’s precisely the sort of personality preservation that I find largely useless and would like to avoid. I’m not talking about copying memories from one brain to another; I’m talking about preserving the sense of self in such a way that the person undergoing this procedure would have the following subjective experience: be anesthetized (probably), undergo surgery (because I picture it as some form of surgery), “wake up in new body”. (The old body would likely get buried, because the whole purpose of performing such a transfer would be to save dying—very old or terminally ill—people’s lives.) There would be only one extant copy of that person’s memories, and yet they wouldn’t “die”; there would be the same sort of continuity of self experienced by people before and after going to sleep. The one who would “die” is technically the person in the body which constitutes the recipient of the transfer (who may have been grown just for this purpose and kept unconscious its whole life). That’s what I mean. Think of it as more or less what happens to the main character in the movie Avatar.
I realize the whole thing doesn’t sound very scientific, but have I managed to get my point across?
As to uploading, it is more than the brain that needs to be emulated. We have hormonal systems that mediate fear and joy and probably a broad range of other feelings. I have a sense of my body that I am in some sense constantly aware of which would have to be simulated and would probably be different in an em of me than it is in me, just as it would be different if my memories were put in another body.
Yes, but… Everybody’s physiological basis for feelings is more or less the same; granted, there are structural differences that cause variation in innate personality traits and other mental functions, and a different brain might employ the body’s neurotransmitter reserve in different ways (I think), but the whole system is sufficiently similar from human to human that we can relate to each other’s experiences. There would be differences, and the differences would cause the person to behave differently in the “new body” than it did in the “old body”, but I don’t think one would have to move the glands or limbic system or what-have-you in addition to just the brain.
I understand what you are going for. And I present the following problem with it.
Dahlen A is rendered unconscious. While A is unconscious, his memories are completely copied to unconscious body B. Dahlen B is woken up. Your scenario is fulfilled: Dahlen B has entirely the memories of being put to sleep in body A and waking up in body B. Dahlen B examines his memories and sees no gap in his existence other than the “normal” one of the anesthesia used to render Dahlen A unconscious. Your desires for a transfer scenario are fulfilled!
Scenario 1: Dahlen A is killed while unconscious and body disposed of. Nothing ever interferes with the perception of Dahlen A and everyone around that there has been a transfer of consciousness from Dahlen A to Dahlen B.
Scenario 2: A few days later Dahlen A is woken up. Dahlen A of course has the sense of continuous consciousness just as he would if he had undergone a gall bladder surgery. Dahlen A and Dahlen B are brought together with other friends of Dahlen. Dahlen A is introspectively sure that he is the “real” Dahlen and no transfer ever took place. Dahlen B is introspectively sure that he is the “real” Dahlen and that a transfer did take place.
Your scenario assumes that there can be only one Dahlen. That the essence of Dahlen is a unique thing in the universe, and that it cannot be copied so that there are two. I think this assumption is false. I think if you make a “good enough” copy of Dahlen that you will have two essences of Dahlen, and that at no point does a single essence of Dahlen exist, and move from one body to another.
Further, if I am right and the essence of Dahlen can be copied, multiplied, and each possessor of a copy has the complete introspective property of seeing that it is in fact Dahlen, then it is unscientific to think that in the absence of copying, that your day to day existence is anything more than this. That each day you wake up, each moment you experience, your “continuity” is something you experience subjectively as a current state due to your examination of your memories. More important, your continuity is NOT something “real,” not something which either other observers, or even yourself and your copies introspecting from within the brain of Dahlen A, B, C etc. can ever distinguish from “real” continuity vs just the sense of continuity which follows from a good quality memory copy.
That there is a single essence of Dahlen which normally stays in one body, but which can be moved from one body to another, or into a machine, I believe is a false assumption, and that it is falsified by these thought experiments. As much as you and I might like to believe there is an essential continuity which we preserve as long as we stay alive, a rational examination of how we experience that continuity shows that it is not a real continuity, that copies could be created which would experience that continuity in as real a sense as the original whether or not the original is kept around.
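The copying argument can be illustrated with a toy programming analogy (purely illustrative, not a claim about how brains work; the `Person` class and its fields are invented for the sketch): when a program’s state is duplicated, no “essence” moves from one object to the other; each copy, inspecting only its own stored memories, reports the same identity.

```python
import copy

class Person:
    """Toy stand-in for a mind: identity here is nothing but stored state."""
    def __init__(self, name, memories):
        self.name = name
        self.memories = memories

    def introspect(self):
        # Each instance consults only its own memories to decide who it is.
        return f"I am {self.name}; I remember {len(self.memories)} things."

dahlen_a = Person("Dahlen", ["childhood", "anesthesia"])
dahlen_b = copy.deepcopy(dahlen_a)   # the "transfer": a perfect state copy
dahlen_b.memories.append("waking in a new body")

# Both report being Dahlen; nothing was subtracted from A to create B.
print(dahlen_a.introspect())   # I am Dahlen; I remember 2 things.
print(dahlen_b.introspect())   # I am Dahlen; I remember 3 things.
print(dahlen_a is dahlen_b)    # False: two distinct instances, no shared essence
```

In the analogy, asking which object is the “real” Dahlen has no answer the runtime could supply: both are complete, and at no point did a single identity travel between them.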
By this reasoning, isn’t it okay to kill someone (or at least to kill them in their sleep)? After all, if everyone’s life is a constant sequence of different entities, what you’re killing would have ceased existing anyway. You’re just preventing a new entity from coming into existence. But preventing a new entity from coming into existence isn’t murder, even if the new entity resembles a previous one.
By this reasoning, isn’t it okay to kill someone (or at least to kill them in their sleep)?
You tell me.
If you don’t like the moral implications of a certain hypothesis, this should have precisely zero effect on your estimation of the probability that this hypothesis is correct. The entire history of the growing acceptance of evolution as a “true” theory follows precisely this course. Many people HATED the implication that man is just another animal. That a sentiment for morality evolved because groups in which that sentiment existed were able to out-compete groups in which that sentiment was weaker. That the statue of David or the theory of General Relativity, or the love you feel for your mother or your dog arise as a consequence, ultimately, of mindless random variations producing populations from which some do better than others and pass down the variations they have to the next generation.
So if the implications of the continuity of consciousness are morally distasteful to you, do not make the mistake of thinking that makes them any less likely to be true. A study of science and scientific progress should cure you of this very human tendency.
If your reasoning implies ~X, then X implies that your reasoning is wrong. And if X implies that your reasoning is wrong, then evidence for X is evidence against your reasoning.
In other words, you have no idea what you are talking about. The fact that something has “distasteful implications” (that is, that it implies ~X, and there is evidence for X) does mean it is less likely to be true.
Historically, the hypothesis that the earth orbited the sun had the distasteful implications that we were not the center of the universe. Galileo was prosecuted for this belief and recanted it under threat. I am surprised that you think the distasteful implications for this belief were evidence that the earth did not in fact orbit the sun.
Historically the hypothesis that humans evolved from non-human animals had the distasteful implications that humans had not been created by god in his image and provided with immortal souls by god. I am surprised that you consider this distaste to be evidence that evolution is an incorrect theory of the origin of species, including our own.
This is a rationality message board, devoted to, among other things, listing the common mistakes that humans make in trying to determine the truth. I would have bet dollars against donuts that rejecting the truth of a hypothesis because its implications were distasteful would have been an obvious candidate for that list, and I would have apparently lost.
If you had reason to believe that the Earth is the center of the universe, the fact that orbiting the sun contradicts that is evidence against the Earth orbiting the sun. It is related to proof by contradiction; if your premises lead you to a contradictory conclusion, then one of your premises is bad. And if one of your premises is something in which you are justified in having extremely high confidence, such as “there is such a thing as murder”, it’s probably the other premise that needs to be discarded.
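The evidential point can be made concrete with a toy Bayesian calculation (the numbers are illustrative assumptions, not estimates about the actual debate): if a theory T implies ~X, then evidence E supporting X is unlikely under T, and observing E lowers our credence in T.

```python
def posterior(prior_t, p_e_given_t, p_e_given_not_t):
    """P(T | E) via Bayes' rule."""
    p_e = p_e_given_t * prior_t + p_e_given_not_t * (1 - prior_t)
    return p_e_given_t * prior_t / p_e

prior_t = 0.5          # starting credence in theory T (which implies ~X)
p_e_given_t = 0.1      # evidence for X is unlikely if T is true...
p_e_given_not_t = 0.6  # ...but fairly likely if T is false

post = posterior(prior_t, p_e_given_t, p_e_given_not_t)
print(round(post, 3))  # 0.143: credence in T drops from 0.5
```

How far the credence drops depends entirely on how strongly you were justified in believing X in the first place, which is the crux of the disagreement above.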
I am surprised that you consider this distaste to be evidence that evolution is an incorrect theory of the origin of species
If you have reason to believe that humans have souls, and evolution implies that they don’t, that is evidence against evolution. Of course, how good that is as evidence against evolution depends on how good your reason is to believe that humans have souls. In the case of souls, that isn’t really very good.
Evidence that killing is wrong is certainly possible, but your statement “I think that killing is wrong” is such weak evidence that it is fair for us to dismiss it. You may provide reasons why we should think killing is wrong, and maybe we will accept your reasons, but so far you have not given us anything worth considering.
I think that you are also equivocating on the word ‘imply’, suggesting that ‘distasteful implications’ means something like ‘logical implications’.
The task you describe, at least the part where no whole brain transplant is involved, can be divided into two parts: 1) extracting the essential information about your mind from your brain, and 2) implanting that same information back into another brain.
Either of these could be achieved in two radically different ways: a) psychologically, i.e. by interview or memoir writing on the extraction side and “brain-washing” on the implanting side, or b) technologically, i.e. by functional MRI, electro-encephalography, etc on the extraction side. It is hard for me to envision a technological implantation method.
Either way, it seems to me that once we understand the mind enough to do any of this, it will turn out the easiest to just do the extraction part and then simulate the mind on a computer, instead of implanting it into a new body. Eliminate the wetware, and gain the benefit of regular backups, copious copies, and Moore’s law for increasing effectiveness. Also, this would be ethically much more tractable.
It seems to me this could also be the solution to the unfriendly AI problem. What if the AI are us? Then yielding the world to them would not be so much of a problem, suddenly.
psychologically, i.e. by interview or memoir writing on the extraction side and “brain-washing” on the implanting side,
I would expect recreating a mind from interviews and memoirs to be about as accurate as building a car based on interviews and memoirs written by someone who had driven cars. Which is to say: the part of our mind that talks and writes is not noted for its brilliant and detailed insight into how the vast majority of the mind works.
I suppose it boils down to what you include when you say “mind”. I think the part of our mind that talks and writes is not very different from the part that thinks. So, if you narrowly, but reasonably, define the “mind” as only the conscious, thinking part of our personality, it might not be so farfetched to think a reasonable reconstruction of it from writings is possible.
Thought and language are closely related. Ask yourself: How many of my thoughts could I put into language, given a good effort? My gut feeling is “most of them”, but I could be wrong. The same goes for memories. If a memory can not be expressed, can it even be called a memory?
Is it possible even in principle to perform a “consciousness transfer” from one human body to another? On the same principle as mind uploading, only the mind ends up in another biological body rather than a computer. Can you transfer “software” from one brain to another in a purely informational way, while preserving the anatomical integrity of the second organism? If so, would the recipient organism come from a fully alive and functional human who would be basically killed for this purpose? Or bred for this purpose? Or would it require a complete brain transplant? (If so, how would neural structures found in the second body heal & connect with the transplanted brain so that a functional central nervous system results?) Wouldn’t the person whose consciousness is being transferred experience some sort of personality change due to “inhabiting” a structurally different brain or body?
Is this whole hypothesis just an artifact of reminiscent introjected mind-body dualism, not compatible with modern science? Does the science world even know enough about consciousness and the brain to be able to answer this question?
I’m asking this because ever since I found out about ems and mind uploading, having minds moved to bodies rather than computers seemed to me a more appealing hypothetical solution to the problem of death/mortality. Unfortunately, I lack the necessary background knowledge to think coherently about this idea, so I figured there are many people on LW who don’t, and could explain to me whether this whole idea makes sense.
I don’t think anybody has hard evidence of answers to any of those questions yet (though I’d be fascinated to learn otherwise) but I can offer some conjectures:
Possible in principle? Yes. I see no evidence that sentience and identity are anything other than information stored in the nervous system, and in theory the cognitive portion of a nervous system an organ and could be transplanted like any other.
Preserving anatomical integrity? Not with anything like current science. We can take non-intrusive brain scans, but they’re pretty low-resolution and (so far as I know) strictly read-only. Even simply stimulating parts of the brain isn’t enough to basically re-write it in such a way that it becomes another person’s brain.
Need to kill donors? To the best of my knowledge, it’s theoretically possible to basically mature a human body including a potentially-functional brain, while keeping that brain in a vegetative state the entire time. Of course, that’s still a potential human—the vegetativeness needs to be reversible for this to be useful—so the ethics are still highly questionable. It’s probably possible to do it without a full brain at all, which seems less evil if you can somehow do it my some mechanism other than what amounts to a pre-natal full lobotomy, but would require the physical brain transplant option for transference.
Nerves connecting and healing? Nerves can repair themselves, though it’s usually extremely slow. Stem cell therapies have potential here, though. Connecting the brain to the rest of the body is a lot of nerves, but they’re pretty much all sensory and motor nerves so far as I know; the brain itself is fairly self-contained
Personality change? That depends on how different the new body is from the old, I would guess. The obviously-preferable body is a clone, for many reasons including avoiding the need to avoid immune system rejection of the new brain. Personality is always going to be somewhat externally-driven, so I wouldn’t expect somebody transferred from a 90-year-old body to a 20-year-old one to have the same personality regardless of any other information because the body will just be younger. On the other hand, if you use a clone body that’s the same age as the transferee, it wouldn’t shock me if the personality didn’t actually change significantly; it should basically feel like going under for surgery and then coming out again with nothing changed.
Now, mind you, I’m no brain surgeon (or medical professional of any sort), nor have I studied any significant amount of psychology. Nor am I a philosopher (see my question above). However, I don’t really see how the mind could be anything except a characteristic of the body. Altering (intentionally or otherwise) the part of the body responsible for thought alters the mind. Our current attempted maps of the mind don’t come close to fully representing the territory, but I firmly believe it is mappable. Whether an existing one is re-mappable I can’t say, but the idea of transplanting a brain has been explored in science fiction for decades, and in theory I see ne logical reason why it couldn’t work.
I don’t think this is currently possible. The body just wouldn’t work. A large part of the ‘wiring’ during infant and childhood is connecting body parts and functions with higher and higher level concepts. Think about toilet training. You aren’t even aware of how it works but it nonetheless somehow connects large scale planning (how urgent is it, when and where are toilets) to the actual control of the organs. Considering how differnt minds (including the connection to the body) are I think the minimum requirement (short of signularity-level interventions) is an identical twin.
That said I think the existing techniques for transferring motion from one brain to another combined with advanced hypnosis and drugs could conceivably developed to a point where it is possible to transfer noticable parts of your identity over to another body—at least over an extended period of time where the new brain ‘learn’ to be you. To also transfer memory is camparably easy. Whether the result can be called ‘you’ or is sufficiently alike to you is another question.
That’s how I pictured it, yes. At this point I wouldn’t concern myself with the ethics of it, because, if our technology advances this much, then simply the fact that humanity can perform such a feat is an extremely positive thing, and probably the end of death as we know it. What worries me more is that this wouldn’t result in a functional mature individual. For instance: in order to develop the muscular system, the body’s skeletal muscles would have to experience some sort of stress, i.e. be used. If you grow the organism in a jar from birth to consciousness transfer (as is probably most ethical), it wouldn’t have moved at all its entire life up to that point, and would therefore have extremely weak musculature. What to do in the meantime, electrically stimulate the muscles? Maybe, but it probably wouldn’t have results comparable to natural usage. Besides, there are probably many other body subsystems that would suffer similarly without much you could do about it. See Gunnar Zarncke’s comment below.
Yes, but I imagine most uses to be related to rejuvenation. It would mean that the genetic info required for cloning would have to be gathered basically at birth (and the cloning process begun shortly thereafter), and there would still be a 9-month age difference. There’s little point in growing a backup clone for an organism so soon after birth. An age difference of 20 years between person and clone seems more reasonable.
In order to provide a definite answer to this question, we’d need to know how the brain produces consciousness and personality, as well as the exact mechanism of the upload (e.g., can it rewire synapses?).
Not exactly true; we probably don’t need to know how consciousness arises. We would certainly have to rewire synapses to match the original brain, and it is likely that if we exactly replicate brain structure neuron by neuron, synapse by synapse, we would still not know where consciousness lies, but would have a conscious duplicate of the original.
Alternatively you could hypothesize a soul, but that seems like worry for worry’s sake.
The flip side to this is that there is no measurable difference between ‘someone who is you and feels conscious’ and ‘someone who is exactly like you in every way but does not feel conscious (but will continue to claim that e does)’. Even if you identified a mental state on a brain scan that you felt certain that was causing the experience of consciousness, in order to approximate a proof of this you would have to be able to measure a group of subjects that are nearly identical except not experiencing consciousness, a group that has not yet been found in nature.
This can already be done via the senses. This also transfers consciousness of the content that is being transferred. What would consciousness without content look like?
There’s no such thing as “purely informational” when it comes to brains.
If you want to focus on that problem it’s likely easier to simply fix up whatever is wrong in the body you are starting with than doing complex uploading.
It’s good to know, but can you elaborate more on this in the context of the grandparent comment? Perhaps with an analogy to computers.
It occurred to me too, but I’m not sure this is the definite conclusion. Fully healing an aging organism suffering from at least one severe disease, while more reasonably closer to current medical technology, wouldn’t leave the patient in as good a state as simply moving to a 20-year-old body.
Brains are not computers.
Of course you wouldn’t only heal one severe disease. You would also lengthen telomeres and do all sorts of other things that reduce the effects of aging.
Suppose all the memories in one person were wiped and replaced with your memories. I believe the new body would claim to be you. It would introspect as you might now, and find your memories as its own, and say “I am Dahlen in a new body.”
But would it be you? If the copying had been non-destructive, then Dahlen in the old body still exists and would “know” on meeting Dahlen in the new body that Dahlen in the new body was really someone else who just got all Dahlen’s memories up to that point.
Meanwhile, Dahlen in the new body would have capabilities, moods, reactions, which would depend on the substrate more than the memories. The functional parts of the brain, the wiring-other-than-memories as it were, would be different in the new body. Dahlen in the new body would probably behave in ways that were similar to how the old body with its old memories behaved. It would still think it was Dahlen, but as Dahlen in the old body might think, that would just be its opinion and obviously it is mistaken.
As to uploading, it is more than the brain that needs to be emulated. We have hormonal systems that mediate fear and joy and probably a broad range of other feelings. I have a sense of my body that I am in some sense constantly aware of which would have to be simulated and would probably be different in an em of me than it is in me, just as it would be different if my memories were put in another body.
Would anybody other than Dahlen in the old body have a reason to doubt that Dahlen in the new body wasn’t really Dahlen? I don’t think so, and especially Dahlen in the new body would probably be pretty sure it was Dahlen, even if it claimed to rationally understand how it might not be. It would know it was somebody, and wouldn’t be able to come up with any other compelling idea for who it was other than Dahlen.
I understand all this. And it’s precisely the sort of personality preservation that I find largely useless and would like to avoid. I’m not talking about copying memories from one brain to another; I’m talking about preserving the sense of self in such a way that the person undergoing this procedure would have the following subjective experience: be anesthetized (probably), undergo surgery (because I picture it as some form of surgery), “wake up in new body”. (The old body would likely get buried, because the whole purpose of performing such a transfer would be to save dying—very old or terminally ill—people’s lives.) There would be only one extant copy of that person’s memories, and yet they wouldn’t “die”; there would be the same sort of continuity of self experienced by people before and after going to sleep. The one who would “die” is technically the person in the body which constitutes the recipient of the transfer (who may have been grown just for this purpose and kept unconscious its whole life). That’s what I mean. Think of it as more or less what happens to the main character in the movie Avatar.
I realize the whole thing doesn’t sound very scientific, but have I managed to get my point across?
Yes, but… Everybody’s physiological basis for feelings is more or less the same; granted, there are structural differences that cause variation in innate personality traits and other mental functions, and a different brain might employ the body’s neurotransmitter reserve in different ways (I think), but the whole system is sufficiently similar from human to human that we can relate to each other’s experiences. There would be differences, and the differences would cause the person to behave differently in the “new body” than it did in the “old body”, but I don’t think one would have to move the glands or limbic system or what-have-you in addition to just the brain.
I understand what you are going for. And I present the following problem with it.
Dahlen A is rendered unconscious. While A is unconscious, his memories are completely copied into unconscious body B. Dahlen B is woken up. Your scenario is fulfilled: Dahlen B has all the memories of being put to sleep in body A and waking up in body B. Dahlen B examines his memories and sees no gap in his existence other than the “normal” one of the anesthesia used to render Dahlen A unconscious. Your desires for a transfer scenario are fulfilled!
Scenario 1: Dahlen A is killed while unconscious and body disposed of. Nothing ever interferes with the perception of Dahlen A and everyone around that there has been a transfer of consciousness from Dahlen A to Dahlen B.
Scenario 2: A few days later Dahlen A is woken up. Dahlen A of course has the sense of continuous consciousness just as he would if he had undergone a gall bladder surgery. Dahlen A and Dahlen B are brought together with other friends of Dahlen. Dahlen A is introspectively sure that he is the “real” Dahlen and no transfer ever took place. Dahlen B is introspectively sure that he is the “real” Dahlen and that a transfer did take place.
Your scenario assumes that there can be only one Dahlen. That the essence of Dahlen is a unique thing in the universe, and that it cannot be copied so that there are two. I think this assumption is false. I think if you make a “good enough” copy of Dahlen that you will have two essences of Dahlen, and that at no point does a single essence of Dahlen exist, and move from one body to another.
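The copying argument can be illustrated with a toy programming analogy (the class and its memories are my own invented example, not a model of real minds): if identity is just stored information, a deep copy produces two instances whose introspection is identical, and at no point does anything “move” from one to the other.

```python
import copy

class Mind:
    """A toy stand-in for 'identity as stored information'."""

    def __init__(self, memories):
        self.memories = memories

    def who_am_i(self):
        # Each instance introspects only its own memories.
        if "Dahlen's childhood" in self.memories:
            return "I am Dahlen"
        return "I am someone else"

original = Mind(["Dahlen's childhood", "being anesthetized"])
duplicate = copy.deepcopy(original)

# Both instances introspect identically...
print(original.who_am_i())    # prints "I am Dahlen"
print(duplicate.who_am_i())   # prints "I am Dahlen"

# ...yet they are two distinct objects; no single "essence" was transferred.
print(original is duplicate)  # prints False
```

Neither instance has any internal way to tell whether it is “the original”; the distinction exists only from the outside, which is exactly the situation of Dahlen A and Dahlen B.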
Further, if I am right and the essence of Dahlen can be copied and multiplied, and each possessor of a copy has the complete introspective property of seeing that it is in fact Dahlen, then it is unscientific to think that, in the absence of copying, your day-to-day existence is anything more than this. Each day you wake up, each moment you experience, your “continuity” is something you experience subjectively as a current state due to your examination of your memories. More important, your continuity is NOT something “real”—not something which other observers, or even you and your copies introspecting from within the brains of Dahlen A, B, C, etc., can ever use to distinguish “real” continuity from the mere sense of continuity that follows from a good-quality memory copy.
That there is a single essence of Dahlen which normally stays in one body, but which can be moved to another body or into a machine, is, I believe, a false assumption, one falsified by these thought experiments. As much as you and I might like to believe there is an essential continuity which we preserve as long as we stay alive, a rational examination of how we experience that continuity shows that it is not a real continuity: copies could be created which would experience that continuity in as real a sense as the original, whether or not the original is kept around.
By this reasoning, isn’t it okay to kill someone (or at least to kill them in their sleep)? After all, if everyone’s life is a constant sequence of different entities, what you’re killing would have ceased existing anyway. You’re just preventing a new entity from coming into existence. But preventing a new entity from coming into existence isn’t murder, even if the new entity resembles a previous one.
You tell me.
If you don’t like the moral implications of a certain hypothesis, this should have precisely zero effect on your estimation of the probability that this hypothesis is correct. The entire history of the growing acceptance of evolution as a “true” theory follows precisely this course. Many people HATED the implication that man is just another animal. That a sentiment for morality evolved because groups in which that sentiment existed were able to out-compete groups in which that sentiment was weaker. That the statue of David or the theory of General Relativity, or the love you feel for your mother or your dog arise as a consequence, ultimately, of mindless random variations producing populations from which some do better than others and pass down the variations they have to the next generation.
So if the implications of the continuity of consciousness are morally distasteful to you, do not make the mistake of thinking that makes them any less likely to be true. A study of science and scientific progress should cure you of this very human tendency.
If your reasoning implies ~X, then X implies that your reasoning is wrong. And if X implies that your reasoning is wrong, then evidence for X is evidence against your reasoning.
In other words, you have no idea what you are talking about. The fact that something has “distasteful implications” (that is, that it implies ~X, and there is evidence for X) does mean it is less likely to be true.
Help me out, readers.
The fact that something has distasteful implications means it is less likely to be true.
Historically, the hypothesis that the earth orbited the sun had the distasteful implications that we were not the center of the universe. Galileo was prosecuted for this belief and recanted it under threat. I am surprised that you think the distasteful implications for this belief were evidence that the earth did not in fact orbit the sun.
Historically the hypothesis that humans evolved from non-human animals had the distasteful implications that humans had not been created by god in his image and provided with immortal souls by god. I am surprised that you consider this distaste to be evidence that evolution is an incorrect theory of the origin of species, including our own.
This is a rationality message board, devoted to, among other things, listing the common mistakes that humans make in trying to determine the truth. I would have bet dollars against donuts that rejecting the truth of a hypothesis because its implications were distasteful would have been an obvious candidate for that list, and I would have apparently lost.
If you had reason to believe that the Earth is the center of the universe, the fact that orbiting the sun contradicts that is evidence against the Earth orbiting the sun. It is related to proof by contradiction; if your premises lead you to a contradictory conclusion, then one of your premises is bad. And if one of your premises is something in which you are justified in having extremely high confidence, such as “there is such a thing as murder”, it’s probably the other premise that needs to be discarded.
If you have reason to believe that humans have souls, and evolution implies that they don’t, that is evidence against evolution. Of course, how good that is as evidence against evolution depends on how good your reason is to believe that humans have souls. In the case of souls, that isn’t really very good.
Evidence that killing is wrong is certainly possible, but your statement “I think that killing is wrong” is such weak evidence that it is fair for us to dismiss it. You may provide reasons why we should think killing is wrong, and maybe we will accept your reasons, but so far you have not given us anything worth considering.
I think that you are also equivocating on the word ‘imply’, suggesting that ‘distasteful implications’ means something like ‘logical implications’.
The task you describe, at least the part where no whole brain transplant is involved, can be divided into two parts: 1) extracting the essential information about your mind from your brain, and 2) implanting that same information back into another brain.
Either of these could be achieved in two radically different ways: a) psychologically, i.e. by interview or memoir writing on the extraction side and “brain-washing” on the implanting side, or b) technologically, i.e. by functional MRI, electro-encephalography, etc on the extraction side. It is hard for me to envision a technological implantation method.
Either way, it seems to me that once we understand the mind enough to do any of this, it will turn out the easiest to just do the extraction part and then simulate the mind on a computer, instead of implanting it into a new body. Eliminate the wetware, and gain the benefit of regular backups, copious copies, and Moore’s law for increasing effectiveness. Also, this would be ethically much more tractable.
It seems to me this could also be the solution to the unfriendly AI problem. What if the AI are us? Then yielding the world to them would not be so much of a problem, suddenly.
I would expect recreating a mind from interviews and memoirs to be about as accurate as building a car based on interviews and memoirs written by someone who had driven cars. Which is to say: the part of our mind that talks and writes is not noted for its brilliant and detailed insight into how the vast majority of the mind works.
Good point.
I suppose it boils down to what you include when you say “mind”. I think the part of our mind that talks and writes is not very different from the part that thinks. So, if you narrowly, but reasonably, define the “mind” as only the conscious, thinking part of our personality, it might not be so farfetched to think a reasonable reconstruction of it from writings is possible.
Thought and language are closely related. Ask yourself: How many of my thoughts could I put into language, given a good effort? My gut feeling is “most of them”, but I could be wrong. The same goes for memories. If a memory can not be expressed, can it even be called a memory?