There is information you have about yourself that other people do not.
Yes. I was quite careful about the wording there. I’m not saying that whoever comes out of the uploading procedure will necessarily be you. I’m saying that if it doesn’t act like you, other people are going to notice. And if it doesn’t feel on the inside like it used to feel on the inside, then unless the capacity to notice this was also erased, the upload is going to notice.
So, for this scenario:
1. Something needs to be missing on the inside—something important to human value.
2. This thing needs to be non-noticeable from the outside under any degree of examination.
3. This thing needs to be non-noticeable from the inside under any degree of examination.

This is a very heavy conjunction.
You know… I think you’re right that the uploads themselves at least would notice.
I think the risk of non-philosophical zombies is smaller than the other risk I mentioned, the one having to do with continuity—a copy of you (no matter how accurate) no longer being you after the two of you diverge, since you no longer have access to each other’s minds.
My speculating that your copy might also be a zombie only muddies the waters.
I don’t have access to the mind of me one second in the future or past either, so I don’t put much stock in continuity as something that I stand to lose.
You have access to your future mind in the sense that it is an evolution of your current mind. Your copy’s future mind is an evolution of your copy’s current mind, not yours.
Perhaps this tight causal link is what makes me care more about the mes that will branch off in the future than I care about the past me of which I am a branch. Perhaps I would see a copy of myself as equivalent to me if we had at least sporadic direct access to each other’s mind states. So my skepticism toward immortality-through-backup-copies is not unconditional.
You might not put much stock into that, and you might also be rationalizing away your basic will to live. What do you stand to lose?
My copy’s future mind is an evolution of pre-copy me’s current mind, and correlates overwhelmingly with my own for a fairly long time after the copy was made. That means that making the copy is good for all pre-copy me’s, and to some (large) degree even for post-copy me’s. I’d certainly be more willing to take risks if I had a backup. After all, what do I stand to lose? A few days of memory?
(Basically, crippling computing scarcity aside, I don’t see any situation in which I would be better off not uploading.)
To clarify: I don’t put stock in single-instance continuity. I want the future to have me’s in it; I don’t particularly care what their substrate is, or whether they’re second-to-second continuous.
For what it’s worth, I agreed with your position for years, but changed my opinion after Wei Dai suggested a new argument to me.
Suppose you have an upload saying “I’m conscious”. You start optimizing the program, step by little step, until you get a tiny program that just outputs the string “I’m conscious” without actually being conscious. How can we tell at which point the program lost consciousness? And if we can’t tell, then why are we sure that the process of scanning and uploading a biological brain doesn’t have similar problems? Especially if the uploading is done by an AI who might want to fit more people into the universe.
That answers problems 1 and 3, but not problem 2.
Moreover, it only shows that the slope is slippery at the very, very bottom, after all introspective capability has been lost; no argument is provided about the top. AND you’re applying it to a single-step procedure with an easy before/after comparison, so we can’t get a boiled-frog effect.
Overeager optimization is a serious concern once you’re digitized, for sure.
Sorry, what before/after comparison are you thinking of?
The transition is the one in the OP—the digitization process itself, going from meat to, well, not-meat.
You only need to do that once.
The comparison would be by behavior—do they think differently, beyond what you’d expect from differing circumstances? Do they still seem human enough? Unless it is all very sudden, there will be plenty of time to notice inhumanity in the uploads.
That goes doubly if they can be placed in convincing androids, so that the circumstances differ as little as possible.