Zombies again? Meh.
It astonishes me how many otherwise skeptical people are happy to ascribe magical properties to their substrate.
Suppose the copy is not a perfect replication. Suppose you can emulate your brain with 90% accuracy cheaply, and that each further percentage point of accuracy requires 2X more computing power. This shifts the issue from one of “magical thinking” (supposing that a software replica that is 100% accurate differs from the exact same software on a different substrate) to the question of whether a simulation that is 90% accurate is “good enough”.
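A quick sketch of this stipulated cost model (the 90% baseline and the doubling per percentage point are the comment's made-up numbers, not real estimates):

```python
# Toy cost model: 90% accuracy costs some baseline amount of compute,
# and each additional percentage point of accuracy doubles the cost.
def compute_cost(accuracy_percent, base_cost=1.0):
    """Relative compute cost for a given emulation accuracy (>= 90%)."""
    if accuracy_percent < 90:
        raise ValueError("model only defined for accuracy >= 90%")
    return base_cost * 2 ** (accuracy_percent - 90)

for acc in (90, 95, 99):
    print(f"{acc}% accuracy -> {compute_cost(acc):g}x baseline compute")
```

Under this toy model, 95% accuracy already costs 32x the baseline and 99% costs 512x, which is what makes the “good enough” question economically real rather than purely philosophical.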
Of course, “good enough” is a vague phrase, so it’s necessary to determine how we should evaluate the quality of a replica. I can think of a few criteria off the top of my head (speed of emulation, similarity of behavioral responses to similar situations). It certainly makes for some puzzling philosophy over the nature of identity.
I do not ascribe any magical properties to “my substrate”; however, I think it’s extremely foolish to think of the mind and body as separate. The mind is a process of the body, at least on my understanding of contemporary cognitive science. Another way to put it: my body is my mind. I’m all for radical technology, but I think mind uploading is the most ludicrous, weak, and underwhelming strand of transhumanist thought (speaking as an ardent transhumanist).
Well, OK. What if we change our pitch from “approximate mind simulation” to “approximate identity-focal body simulation”?
A simulation of X is not X.
That’s not a reply to his point.
That’s probably because I don’t understand his point. I’d wager, though, that it implies that simulations of a mind are themselves minds with subjective experience. In which case we’d have problems.
Then you should be asking him more questions, not replying with dogma that begs the question. For example: is a ‘simulation’ of arithmetic also arithmetic? If so, your formula would be refuted.
Bump.
What’s a simulation of arithmetic except just arithmetic? In any case, PrometheanFaun, what does “approximate identity-focal body simulation” mean?
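The arithmetic question can be made concrete with a toy example (a sketch of the intuition, not anyone's actual argument): represent numbers as tally marks and “simulate” addition as string concatenation. Whatever we call the resulting process, it still produces genuine sums.

```python
# A toy "simulation" of arithmetic: numbers as tally strings,
# addition as concatenation of tallies.
def to_tally(n):
    return "|" * n

def add(a, b):
    # Concatenating two tallies "simulates" adding the numbers they encode.
    return a + b

result = add(to_tally(3), to_tally(4))
print(len(result))  # 7: the simulated sum is a real sum
```

This is the sense in which a simulation of arithmetic arguably just is arithmetic; whether minds work the same way is exactly what the thread is disputing.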
I’m not sure that’s the right question to ask.
I agree that Dave is partially implemented by a brain and partially implemented by a non-brain body. I would also say that Dave is partially implemented by a social structure, and partially implemented by various physical objects outside my body.
If Dave undergoes a successful “mind upload,” we have successfully implemented Dave on a different platform. We can ask, then, how much of Dave’s existing implementation in each system needs to be re-implemented in the new platform in order for the resulting entity to be Dave. We can also ask how much of the new platform implementation of Dave is unique to Dave, and how much of it is essentially identical for every other “uploaded” human.
Put a different way: if we already have a generic human template installed on our target platform, how much of Dave’s current implementation do we need to port over in order to preserve Dave? I suspect it’s a vanishingly small amount, actually, and I expect that >99% of it is stored in my brain.
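The “generic template plus personal delta” idea can be sketched as a diff between two configurations (every name below is invented purely for illustration):

```python
# Hypothetical sketch: a generic human template preinstalled on the
# target platform, plus the Dave-specific delta that must be ported.
generic_template = {
    "motor_control": "standard",
    "language_faculty": "standard",
    "episodic_memory": {},           # empty in the generic template
}
dave = {
    "motor_control": "standard",     # shared with the template
    "language_faculty": "standard",  # shared with the template
    "episodic_memory": {"wedding": "...", "first_bike": "..."},
}

# Only the entries where Dave differs from the template need porting.
delta = {k: v for k, v in dave.items() if generic_template.get(k) != v}
print(sorted(delta))  # ['episodic_memory']
```

On this picture, the argument above amounts to the claim that the delta is small relative to the template, and that nearly all of it lives in the brain.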
What question was I asking? I think you replied to the wrong post. But for what it’s worth, the brain is a subset of the body.
No, I replied to the post I meant to reply to.
I agree that the brain is a subset of the body, and I agree that a not-insignificant portion of “mind” is implemented in parts of the body other than the brain, but I don’t think this means anything in particular about the viability of mind uploads.
I can’t disagree that all parts of the body/brain are amenable, i.e. non-magical and thus capable of emulation. I guess where I’m having trouble is with 1) the application and 2) how and where to draw the line between the physical workings of the body that are significant to the phenomenon of mind and those that aren’t. What colours my thinking on this are people like von Uexkuell, in the sense that what encapsulates our cognition is how we function as animals.
I’m not sure what you mean by the application. Implementing the processes we identify as Dave on a different platform is a huge engineering challenge, to be sure, and nobody knows how to do it yet, so if that’s what you mean you are far from being alone in having trouble with that part.
As for drawing the line, as I tried to say originally, I draw it in terms of analyzing variance. If Dave is implemented on some other platform, Dave will still have a body, although it will be a different body than the one Dave has now. The question then becomes, how much difference does that make?
If we come to function differently as animals, or if we come to function as something other than animals, our cognition will be encapsulated in different ways, certainly, but whether we should care or not depends quite a bit on what we value about our current encapsulation.
Where’s the attribution of magic? You just have a different semantics of “conscious”, “pain”, “pleasure” and so on than they do. They hold that it applies to a narrower range of things, you hold that it applies to a wider range. Why is one semantics more “magical thinking” than the other?
Suppose we’re arguing about “muscle cars”, and I insist that a “muscle car” by definition has to have an internal combustion engine. You say that electric cars can be muscle cars too. Our friend says that neither definition is true or false, and that the historical record of muscle cars and their enthusiasts is insufficient to settle the matter. Our friend might be right, in which case we both may be guilty of magical thinking, but not just one of us. Our friend might be wrong, in which case one of us may be mistaken, but that’s different.
Nope, not exactly zombies.
Alive and well person—just not you.
The reference to Searle clearly classifies this as a zombie argument: it hinges on consciousness.
The “not you” argument makes no sense at all: we are positing a fully conscious, fully intelligent entity which shares all of my memories, all of my preferences, all of my dispositions, all of my projects; which no person, not even my wife or children, would be able to tell apart from the meat-me; but which nevertheless is supposedly not me.
The rare neurological syndrome known as Capgras delusion illustrates why words like “me” or “self” carry such a mysterious aura: the sense of someone’s identity is the result of not one but several computations carried out in different parts of the human brain, which sometimes get out of step, resulting in weird distortions of identity-perception.
But to the extent that “self” is a non-mysterious notion associated with being possessed of a certain set of memories, of future plans and of dispositions, our biological selves already become “someone other than they are” quite naturally with the passage of time; age and experience turn you into someone with a slightly different set of memories, plans and dispositions.
In that sense, uploading and aging are not fundamentally different processes, and any argument which applies to one applies to the other as far as the preservation of “self” is concerned.
Well, there can be a question of what rate of change in dispositions is consistent with still being the same person. As for telling them apart: well, if someone cannot tell a computer program and an animal apart, they are in trouble.
It looks like humanity could currently learn a lot about the concept of self, but seems a bit afraid to try, say, by medically and temporarily freezing the inter-hemispheric link… What would the person remember as their “past self” after re-merging?
All this is moot anyway because gradual uploads are as likely to be possible as stop-and-go ones.