I agree that uploading is copying-then-death. I think you’re basically correct with your thought experiment, but your worries about vagueness are unfounded. The appropriate question is: what counts as death? Consider the following two scenarios:

1. A copy of you is stored on a supercomputer and you’re then obliterated in a furnace.
2. A procedure is being performed on your brain, you’re awake the entire time, and you remain coherent throughout.

In scenario 1 we have a paradigmatic example of death: obliteration in a furnace. In scenario 2 we have a paradigmatic example of surviving an operation without harm. I would say that, if the procedure in 2 involves replacing all or part of your brain, whether it is performed swiftly or slowly is unimportant. Moreover, even if you lost consciousness, it would not be death; people can lose consciousness without any harm coming to them.
Note that you can adjust the first scenario—say, by insisting that the copy is made at the instant of death or that the copying process is destructive as it transpires, or whatever—but the scenario could still go as described. That is, we are supposed to believe that the copy is a continuation of the person despite the possibility of inserting paradigmatic examples of death into the process. This is a clear case of simply stipulating that ‘death’ (and ‘survival’) should mean something entirely different. You can’t hold that you’re still speaking about survival when you insist on surviving any number of paradigmatic cases of death (such as being obliterated in a furnace). There are no forms of death—as we ordinarily conceive of death—that cannot be inserted into the uploading scenario. So we have here as clear a case of something that cannot count as survival as it is possible to have. Anybody who argues otherwise is not arguing for survival, but stipulating a new meaning for the words ‘survival’, ‘death’, etc. That’s fine, but they’re still dead, they’re just not ‘dead’.
I think this realisation makes understanding something like the brain transplant you describe a little easier. For we can say that we are living so long as we don’t undergo anything that would count as dying (which is just to say that we don’t die). There’s nothing mysterious about this. We don’t need to go looking for the one part of the body that maintains our identity under transformation, or start reifying information into a pseudo-soul, or whatever. We just need to ensure whatever we do doesn’t count as death (as ordinarily conceived). Now, in the case of undergoing an operation, there are clear guidelines. We need to maintain viability. I cannot do certain things to you and keep you alive, unless I perform certain interventions. So I think the answer is quite simple: I can do anything to you—make any change—as long as I can keep you alive throughout the process. I can replace your whole brain, as long as you remain viable throughout the process, and you’ll still be alive at the end of it. You will, of course, be ‘brain dead’ unless I maintain certain features of your nervous system too. But this isn’t mysterious either; it’s just that I need to maintain certain structural features of your nervous system to avoid permanent loss of faculties (such as motor control, memory, etc). Replacement with an artificial nervous system is likewise unproblematic, as long as it maintains these important faculties.
A lot of the confusion here comes from unnecessary reification. For example, the fact that the nervous system must be kept structurally intact to maintain certain faculties does not mean that it somehow ‘contains’ those faculties. You can replace it at will, so long as you can keep the patient alive. The person is not ‘in’ the structure (or the material), but the structure is a prerequisite for maintaining certain faculties. The common mistake here is thinking that we must be the structure (or pattern) if we’re not the material, but neither claim makes sense.

Alternatively, say you have a major part of your brain replaced, and the match is not exact. Somebody might, for example, point out that your personality has changed. Horrified, you might wonder, “Am I still me?” But this question is clearly absurd. There is no sense in which you could ask if you are still you. Nor can you coherently ask, “Did I die on the operating table?” Now, you might ask whether you merely came into existence on the operating table, after the original died, etc. But this, too, is nonsense. It assumes a reified concept of “self” or “identity.” There is nothing you can “lose” that would count as a prior version of you ‘dying’ and your being born anew (whether slightly different or not). Of course, there are such things as irreversible mental degradation, dementia, etc. These are tragic and we rightfully speak of a loss of identity, but there’d be no such tragedy in a temporary bout of dementia. A temporary loss of identity is not a loss of identity followed by the gaining of a new identity; it’s a behavioural aberration. A temporary loss of identity with a change in temperament when one recovers is, likewise, unproblematic in this sense; we undergo changes in temperament regardless. Of course, extreme change can bring with it questions of loss of identity, but this is no more problematic for our scenario than an operation gone wrong. “He never fully recovered from his operation,” we might say. Sad, yes, but this type of thing happens even outside of thought experiments.
You are dodging the question by appealing to the dictionary. The dictionary will not prove for you that identity is tied to your body, which is the issue at hand (not “whether your body dies as the result of copying-then-death”, which, as you point out, is trivial).
All true, but it just strengthens the case for what you call “stipulating a new meaning for the words ‘survival’, ‘death’, etc”. Or perhaps, making up new words to replace those. Contemplating cases like these makes me realize that I have stopped caring about ‘death’ in its old exact meaning. In some scenarios “this will kill you” becomes a mere technicality.
Mere stipulation secures very little, though. Consider the following scenario: I start wearing a medallion around my neck and stipulate that, so long as this medallion survives intact, I am to be considered alive, regardless of what befalls me. This is essentially equivalent to what you’d be doing in stipulating survival in the uploading scenario. You’d secure ‘survival’, perhaps, but the would-be uploader has a lot more work to do. You need also to stipulate that when the upload says “On my 6th birthday...” he’s referring to your 6th birthday, etc. I think this project will prove much more difficult. In general, these sorts of uploading scenarios rely on the notion of something being “transferred” from the person to the upload, and it’s this that secures identity and hence reference. But if you’re willing to concede that nothing is transferred—that identity isn’t transferrable—then you’ve got a lot of work to do in order to make the uploading scenario consistent. You’ve got to introduce revised versions of the concepts of identity, memory, self-reference, etc. Doing so consistently is likely a formidable task.
I should have said this about the artificial brain transplant scenario too. While I think the scenario makes sense, it doesn’t secure all the traditional science fiction consequences. So having an artificial brain doesn’t automatically imply you can be “resleeved” if your body is destroyed, etc. Such scenarios tend to involve transferrable identity, which I’m denying. You can’t migrate to a server and live a purely software existence; you’re not now “in” the software.

You can see the problems of reference in this scenario. For example, say you had a robot on Mars whose artificial brain has the same specifications as your own. You want to visit Mars, so you figure you’ll just transfer the software running on your artificial brain to the robot and wake up on Mars. But again, this assumes identity is transferrable in some sense, which it is not. But you might think that this doesn’t matter. You don’t care if it’s you on Mars; you’ll just send your software and bring it back, and then you’ll have the memories of being on Mars. This is where problems of reference come in, because “When I was on Mars...” would be false. You’d have, at best, a set of false memories. This might not seem like a problem; you’ll just compartmentalise the memories, etc. But say the robot fell in love on Mars. Can you truly compartmentalise that? Memories aren’t images you have stored away that you can examine dispassionately; they’re bound up with who you are, what you do, etc. You would surely gain deeply confused feelings about another person, engage in irrational behaviour, etc. This would be causing yourself a kind of harm; introducing a kind of mental illness.
Now, say you simply begin stipulating “by ‘I’ I mean...”, etc, until you’ve consistently rejiggered the whole conceptual scheme to get the kind of outcome the uploader wants. Could you really do this without serious consequences for basic notions of welfare, value, etc? I find this hard to believe. The fact that the Mars scenario abuts issues of value and welfare suggests that introducing new meanings here would also involve stipulating new meanings for these concepts. This then leads to a potential contradiction: it might not be rationally possible to engage in this kind of revisionary task. That is, from your current position, performing such a radical revision would probably count as harmful, damaging to welfare, identity destroying, etc. What does this say about the status of the revisionary project? Perhaps the revisionist would say, “From my revisionary perspective, nothing I have done is harmful.” But for everyone else, he is quite mad. Although I don’t have a knockdown argument against it, I wonder if this sort of revisionary project is possible at all, given the strangeness of having two such unconnected bubbles of rationality.
Now, say you simply begin stipulating “by ‘I’ I mean...”, etc, until you’ve consistently rejiggered the whole conceptual scheme to get the kind of outcome the uploader wants. Could you really do this without serious consequences for basic notions of welfare, value, etc?
No, and that is the point. There are serious drawbacks to the usual notions of welfare, at least in the high-tech future we are discussing, and they need serious correcting. Although, as I mentioned earlier, coining new words for the new concepts would probably facilitate communication, especially when revisionaries and conservatives converse. So maybe “Yi” could be the pronoun for “miy” branching future, in which Yi go to Mars as well as staying home, to be merged later. There is no contradiction, either: my welfare is what I thought I cared about in a certain constellation of cares, but now Yi realize that was a mistake. Misconceptions of what we truly desire or like are, of course, par for the course for human beings; and so are corrections of those conceptions.