Copy my mind to a machine non-destructively, and I still identify with meat-me. You could let machine-me run for a day, or a week, or a year, and only then kill off meat-me. I don’t like that option and would be confused by someone who did.
This is just bizarre. If the point is to preserve continuity, why on earth would you let the copy run independently and diverge? Of course it won’t then represent a continuation of experience from the point at which meat-you was later killed.
The point of the destructive upload is precisely so that you-now can anticipate continuing only as the upload. It’s essentially a charitable act! I don’t want any me to have the experience “dammit, I’m still the meat one”.
Except I don’t anticipate continuing only as the upload; I anticipate being dead. Uploaded-me will remember incorrectly anticipating the same, but uploaded-me was not the one doing the anticipating.
I am an instance of a class, not the class itself.
Actually, tabooing “I”, since it seems to be getting in the way: This instance of the class anticipates that this instance will be dead, and has a problem with that even if other instance(s) of the class remain.
That’s the same as “I”.
There is one instance. It forks. There are two instances. If you claim that one of them was the original instance, you are using “I”.
I’d say that past!you, upload!you, and meat!you are three distinct instances of the class you. Thinking you’re going to die means that past!you does not believe that there is a future!you.
Well, yes. What I was trying to do was avert a confusion where “I” might refer to an instance (meat brain, silicon RAM copy #224) or might refer to a class (the abstract computation they’re both running), by specifying the intended meaning. That is the point of tabooing, right?
Thanks; this seems to be the source of disagreement. I see two instances, not three, the second forking at the moment of upload and running the same program with a new PID. I don’t think we’ll find common ground here, and I’m not sure I’m up to defending my position on the subject. I just found the consequences of that position interesting to explore.
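The fork-and-new-PID framing can be made literal. Here is a minimal Unix-only sketch (using Python's `os.fork`, purely as an illustration of the metaphor, not of anyone's actual position): one running instance forks, and afterwards two processes run the same program from the same state, each with its own PID, with neither marked by the OS as "the real" one.

```python
import os

def fork_demo():
    """One instance becomes two: same program, same state, new PID."""
    parent_pid = os.getpid()
    child_pid = os.fork()  # returns 0 in the child, the child's PID in the parent
    if child_pid == 0:
        # Child instance: identical program state, but a distinct PID.
        os._exit(0)
    # Parent instance: reap the child, then report both PIDs.
    os.waitpid(child_pid, 0)
    return parent_pid, child_pid

parent, child = fork_demo()
assert parent != child  # two instances, two distinct PIDs
```

Nothing in the fork itself privileges the parent except bookkeeping (who got the nonzero return value), which is roughly the point under dispute.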
Fine, but we’re still talking past each other, because I think there is no sense in which dead meat-you “was” the one doing the anticipating that is not also true of live upload-you.
So the whole point of this, as I understood it, was that the universe doesn’t support persistent instances in the way you want it to.
You could follow e.g. Mitchell Porter (as far as I understood him) and claim that there’s a particular quantum doohickey that does support real fundamental continuity of stuff. Do you? Or am I wildly misinterpreting you?
For the record, (this instance of this class) has no problems with the destruction of (other instances of this class or itself), so long as at least one instance remains and is viable, and a handful of similarity conditions are met.
Seriously. We can talk about it on LW chat sometime if you’re bored.
Wait, what?
Are you regularly having the experience “dammit, I’m still the meat one” now?
Well, no, because I don’t remember recently non-destructively uploading myself.
Am I missing your point?
Perhaps I’m missing yours.
Say P1 = my continued existence in a meat body, and P2 = my continued existence in an uploaded body.
It seems clear to me that I prefer P1 to NOT P1… that’s why I don’t kill myself, for example.
So why would I prefer P2 to (P2 AND P1)?
Ah. But I’m not making an argument, just reporting current preferences.
If P2 can happen, that changes my preference for P1 over NOT P1, except in the case where P1 can also extend indefinitely due to, e.g., advances in anti-aging science or alicornication science.
I strongly disvalue any me dying without being able to legitimately (by the lights of my model) anticipate then waking up as an upload. The best way to avoid this scenario is with a destructive upload. Obviously, my enthusiasm for this in any real situation involves a tradeoff between my confidence in the upload process, what I expect life as an upload to be like, and my remaining expected QALYs.
I can imagine creating multiple P2s via non-destructive uploads before that point, but there will always be one meat-me left over. What I want is for him to have no further experiences after whatever turns out to be his last save point, and so no time in which to have the possibly silly, but inevitable and emotionally compelling, thought: “I’ve missed the boat.”
There’s no reason that this should be compelling to you, but do you think it’s actually inconsistent?
I found this enlightening, in that I’d never really understood the point of deliberate destructive uploading until now.
A preference report in return: I would strongly prefer P1 over P2, mildly prefer P2 & P1 over just P1, and moderately prefer P2 over nothing (call nothing P0). I think of destructive-upload as death, but I think I value the algorithm I’m running somewhat, even in the absence of the hardware currently running it.
Given the opportunity to do a non-destructive upload, I would certainly take it. Given the opportunity to do a destructive upload, I would... well, I might take it anyway. Not because it wouldn’t be death, but because not taking it would eventually result in P0. I would prefer such an upload to take place as late in life as possible, assuming the possibility of early death by accident is ignored. (I am not certain which way I would go if it were not ignored.)
Fair enough. Sure, if you happen to have both the desire to not die and the independent desire to stop living in your meat body once you’ve been uploaded, then a destructive upload gives you more of what you want than a nondestructive one.