Because the notion of “me” is not an ontologically basic category, and the question of whether the “real me” wakes up is a question that ought to be un-asked.
I’m a bit confused by the question… you articulated my intent with that sentence perfectly in your other post.
Hrm.. ambiguous semantics. I took it to imply acceptance of the idea but not elevation of its importance, but I see how it could be interpreted differently.
and, as TheOtherDave said,
presumably that also helps explain how they can sleep at night.
EDIT: Never mind, I now understand which part of my statement you misunderstood.
I’m not accepting-but-not-elevating the idea that the “real me” doesn’t wake up on the other side. Rather, I’m saying that questions of personal identity over time do not make sense in the first place. It’s like asking “which color is the most moist?”
You actually continue functioning when you sleep, it’s just that you don’t remember details once you wake up. A more useful example for such discussion is general anesthesia, which shuts down the regions of the brain associated with consciousness. If personal identity is in fact derived from continuity of computation, then it is plausible that general anesthesia would result in a “different you” waking up after the operation. The application to cryonics depends greatly on the subtle distinction of whether vitrification (and more importantly, the recovery process) slows down or stops computation. This has been a source of philosophical angst for me personally, but I’m still a cryonics member.
More troubling is the application to uploading. I haven’t done this yet, but I want my Alcor contract to explicitly forbid uploading as a restoration process, because I am unconvinced that a simulation of my destructively scanned frozen brain would really be a continuation of my personal identity. I was hoping that “Timeless Identity” would address this point, but sadly it punts the issue.
The root of your philosophical dilemma is that “personal identity” is a conceptual substitution for soul—a subjective thread that connects you over space and time.
No such thing exists. There is no specific location in your brain which is you. There is no specific time point which is you. Subjective experience exists only in the fleeting present. The only “thread” connecting you to your past experiences is your current subjective experience of remembering them. That’s all.
I always wonder how I should treat my future self if I reject the continuity of self. Should I think of him like a son? A spouse? A stranger? Should I let him get fat? Not get him a degree? Invest in stock for him? Give him another child?
The root of your philosophical dilemma is that “personal identity” is a conceptual substitution for soul—a subjective thread that connects you over space and time.
No such thing exists. There is no specific location in your brain which is you. There is no specific time point which is you. Subjective experience exists only in the fleeting present. The only “thread” connecting you to your past experiences is your current subjective experience of remembering them. That’s all.
I have a strong subjective experience of moment-to-moment continuity, even if only in the fleeting present. Simply saying “no such thing exists” doesn’t do anything to resolve the underlying confusion. If no such thing as personal identity exists, then why do I experience it? What is the underlying insight that eliminates the question?
This is not an abstract question either. It has huge implications for the construction of timeless decision theory and utilitarian metamorality.
“a strong subjective experience of moment-to-moment continuity” is an artifact of the algorithm your brain implements. It certainly exists, inasmuch as the algorithm itself exists. So does your personal identity. If in the future it becomes possible to run the same algorithm on different hardware, it will still produce this sense of personal identity and will feel like “you” from the inside.
Yes, I’m not questioning whether a future simulation / emulation of me would have an identical subjective experience. To reject that would be a retreat to epiphenomenalism.
Let me rephrase the question, so as to expose the problem: if I were to use advanced technology to have my brain scanned today, then got hit by a bus and cremated, and then 50 years from now that brain scan is used to emulate me, what would my subjective experience be today? Do I experience “HONK Screeeech, bam” then wake up in a computer, or is it “HONK Screeeech, bam” and oblivion?
Yes, I realize that both cases result in a computer simulation of Mark in 2063 claiming to have just woken up in the brain scanner, with a subjective feeling of continuity. But is that belief true? In the two situations there’s a very different outcome for the Mark of 2013. If you can’t see that, then I think we are talking about different things, and maybe we should taboo the phrase “personal/subjective identity”.
if I were to use advanced technology to have my brain scanned today, then got hit by a bus and cremated, and then 50 years from now that brain scan is used to emulate me, what would my subjective experience be today? Do I experience “HONK Screeeech, bam” then wake up in a computer, or is it “HONK Screeeech, bam” and oblivion?
Ah, hopefully I’m slowly getting what you mean. So, there was the original you, Mark 2013, whose algorithm was terminated soon after it processed the inputs “HONK Screeeech, bam”, and the new you, Mark 2063, whose experience is “HONK Screeeech, bam” then “wake up in a computer”. You are concerned with… I’m having trouble articulating what exactly… something about the lack of experiences of Mark 2013? But, say, if Mark 2013 was restored to life in mostly the same physical body after a 50-year “oblivion”, you wouldn’t be?
Ah, hopefully I’m slowly getting what you mean. So, there was the original you, Mark 2013, whose algorithm was terminated soon after it processed the inputs “HONK Screeeech, bam”, and the new you, Mark 2063, whose experience is “HONK Screeeech, bam” then “wake up in a computer”.
Pretty much correct. To be specific, if computational continuity is what matters, then Mark!2063 has my memories, but was in fact “born” the moment the simulation started, 50 years in the future. That’s when his identity began, whereas mine ended when I died in 2013.
This seems a little more intuitive when you consider switching on 100 different emulations of me at the same time. Did I somehow split into 100 different persons? Or were there in fact 101 separate subjective identities, 1 of which terminated in 2013 and 100 new ones created for the simulations? The latter is a more straightforward explanation, IMHO.
You are concerned with… I’m having trouble articulating what exactly… something about the lack of experiences of Mark 2013? But, say, if Mark 2013 was restored to life in mostly the same physical body after a 50-year “oblivion”, you wouldn’t be?
No, that would make little difference as it’s pretty clear that physical continuity is an illusion. If pattern or causal continuity were correct, then it’d be fine, but both theories introduce other problems. If computational continuity is correct, then a reconstructed brain wouldn’t be me any more than a simulation would. However it’s possible that my cryogenically vitrified brain would preserve identity, if it were slowly brought back online without interruption.
I’d have to learn more about how general anesthesia works to decide if personal identity would be preserved on the operating table (until then, it scares the crap out of me). Likewise, an AI or emulation running on a computer that is powered off and then later resumed would also break identity, but depending on the underlying nature of computation and subjective experience, task switching and online suspend/resume may or may not result in cycling identity.
I’ll stop there because I’m trying to formulate all these thoughts into a longer post, or maybe a sequence of posts.
It’s easier to explain in the case of multiple copies of yourself. Imagine the transporter were turned into a replicator—it gets stuck in a loop reconstructing the last thing that went through it, namely you. You step off and turn around to find another version of you just coming out. And then another, and another, etc. Each one of you shares the same memories, but from that moment on you have diverged. Each clone continues life with their own subjective experience until that experience is terminated by that clone’s death.
That sense of subjective experience separate from memories or shared history is what I have been calling “personal identity.” It is what gives me the belief, real or illusory, that I am the same person from moment to moment, day to day, and what separates me from my clones. You are welcome to suggest a better term.
The replicator / clone thought experiment shows that “subjective experience of identity” is something different from the information pattern that represents your mind. There is something, although at this moment that something is not well defined, which makes you the same “you” that will exist five minutes in the future, but which separates you from the “you”s that walked out of the replicator, or exist in simulation, for example.
The first step is recognizing this distinction. Then turn around and apply it to less fantastical situations. If the clone is “you” but not you (meaning no shared identity, and my apologies for the weak terminology), then what’s to say that a future simulation of “you” would also be you? What about cryonics, will your unfrozen brain still be you? That might depend on what they do to repair damage from vitrification. What about general anesthesia? Again, I need to learn more about how general anesthesia works, but if they shut down your processing centers and then restart you later, how is that different from the teleportation or simulation scenario? After all we’ve already established that whatever provides personal identity, it’s not physical continuity.
That sense of subjective experience separate from memories or shared history is what I have been calling “personal identity.” It is what gives me the belief, real or illusory, that I am the same person from moment to moment, day to day, and what separates me from my clones.
Well, OK. So suppose that, after I go through that transporter/replicator, you ask the entity that comes out whether it has the belief, real or illusory, that it is the same person in this moment that it was at the moment it walked into the machine, and it says “yes”.
If personal identity is what creates that belief, and that entity has that belief, it follows that that entity shares my personal identity… doesn’t it?
Well, OK. So suppose that, after I go through that transporter/replicator, you ask the entity that comes out whether it has the belief, real or illusory, that it is the same person in this moment that it was at the moment it walked into the machine, and it says “yes”.
If personal identity is what creates that belief, and that entity has that belief, it follows that that entity shares my personal identity… doesn’t it?
Not quite. If You!Mars gave it thought before answering, his thinking probably went like this: “I have memories of going into the transporter, just a moment ago. I have a continuous sequence of memories, from then until now. Nowhere in those memories does my sense of self change. Right now I am experiencing the same sense of self I always remember experiencing, and laying down new memories. Ergo, by backwards induction, I am the same person that walked into the teleporter.” However, for that—or any—line of meta-reasoning to hold, (1) your memories need to accurately correspond with the true and full history of reality and (2) you need to trust that what occurs in the present also occurred in the past. In other words, it’s kinda like saying “my memory wasn’t altered because I would have remembered that.” It’s not a circular argument per se, but it is a meta loop.
The map is not the territory. What happened to You!Earth’s subjective experience is an objective, if perhaps not empirically observable fact. You!Mars’ belief about what happened may or may not correspond with reality.
What if me!Mars, after giving it thought, shakes his head and says “no, that’s not right. I say I’m the same person because I still have a sense of subjective experience, which is separate from memories or shared history, which gives me the belief, real or illusory, that I am the same person from moment to moment, day to day, and which separates me from my clones”?
Do you take his word for it? Do you assume he’s mistaken? Do you assume he’s lying?
Assuming that he acknowledges that clones have a separate identity, or in other words he admits that there can be instances of himself that are not him, then by asserting the same identity as the person that walked into the teleporter, he is making an extrapolation into the past. He is expressing a belief that, by whatever definition he is using, the person walking into the teleporter meets a standard of me-ness that the clones do not. Unless the definition under consideration explicitly references You!Mars’ mental state (e.g. “by definition” he has shared identity with people he remembers having shared identity with), the validity of that belief is external: it is either true or false. The map is not the territory.
Under an assumption of pattern or causal continuity, for example, it would be explicitly true. For computational continuity it would be false.
If I understood you correctly, then on your account, his claim is simply false, but he isn’t necessarily lying.
Yes?
It seems to follow that he might actually have a sense of subjective experience, which is separate from memories or shared history, which gives him the belief, real or illusory (in this case illusory), that he is the same person from moment to moment, day to day, and the same person who walked into the teleporter, and which separates him from his clones.
If I understood you correctly, then on your account, his claim is simply false, but he isn’t necessarily lying.
Yes, in the sense that it is a belief about his own history which is either true or false like any historical fact. Whether it is actually false depends on the nature of “personal identity”. If I understand the original post correctly, I think Eliezer would argue that his claim is true. I think Eliezer’s argument lacks sufficient justification, and there’s a good chance his claim is false.
It seems to follow that he might actually have a sense of subjective experience, which is separate from memories or shared history, which gives him the belief, real or illusory (in this case illusory), that he is the same person from moment to moment, day to day, and the same person who walked into the teleporter, and which separates him from his clones.
Yes. My question is: is that belief justified?
If your memory were altered so as to make you think you won the lottery, that doesn’t make you any richer. Likewise, You!Mars’ memory was constructed by the transporter machine, following the transmitted design, in such a way as to make him remember stepping into the transporter on Earth as you did, and walking out of it on Mars in seamless continuity. But just because he doesn’t remember the deconstruction, information transmission, and reconstruction steps doesn’t mean they didn’t happen. Once he learns what actually happened during his transport, his decision about whether he remains the same person that entered the machine on Earth depends greatly on his model of consciousness and personal identity/continuity.
It seems to follow that he might actually have a sense of subjective experience, which is separate from memories or shared history, which gives him the belief, real or illusory (in this case illusory), that he is the same person from moment to moment, day to day, and the same person who walked into the teleporter, and which separates him from his clones. Yes. My question is: is that belief justified?
That sense of subjective experience separate from memories or shared history is what I have been calling “personal identity.” It is what gives me the belief, real or illusory, that I am the same person from moment to moment, day to day, and what separates me from my clones.
And yet, here’s Dave!Mars, who has a sense of subjective experience separate from memories or shared history which gives him the belief, real or illusory (in this case illusory), that he is the same person from moment to moment, day to day, and the same person who walked into the teleporter, and which separates him from his clones.
But on your account, he might not have Dave’s personal identity.
So, where is this sense of subjective experience coming from, on your account? Is it causally connected to personal identity, or not?
Once he learns what actually happened during his transport, his decision about whether he remains the same person that entered the machine on Earth depends greatly on his model of consciousness and personal identity/continuity.
Yes, that’s certainly true. By the same token, if I convince you that I placed you in stasis last night for… um… long enough to disrupt your personal identity (a minute? an hour? a millisecond? a nanosecond? how long a period of “computational discontinuity” does it take for personal identity to evaporate on your account, anyway?), you would presumably conclude that you aren’t the same person who went to bed last night. OTOH, if I placed you in stasis last night and didn’t tell you, you’d conclude that you’re the same person, and live out the rest of your life none the wiser.
That experiment shows that “personal identity”, whatever that means, follows a time-tree, not a time-line. That conclusion also must hold if MWI is true.
So I get that there’s a tricky (?) labeling problem here, where it’s somewhat controversial which copy of you should be labeled as having your “personal identity”. The thing that isn’t clear to me is why the labeling problem is important. What observable feature of reality depends on the outcome of this labeling problem? We all agree on how those copies of you will act and what beliefs they’ll have. What else is there to know here?
Would you step through the transporter? If you answered no, would it be moral to force you through the transporter? What if I didn’t know your wishes, but had to extrapolate? Under what conditions would it be okay?
Also, take the more vile forms of Pascal’s mugging and acausal trades. If something threatens torture to a simulation of you, should you be concerned about actually experiencing the torture, thereby subverting your rationalist impulse to shut up and multiply utility?
Would you step through the transporter? If you answered no, would it be moral to force you through the transporter? What if I didn’t know your wishes, but had to extrapolate? Under what conditions would it be okay?
I don’t see how any of that depends on the question of which computations (copies of me) get labeled with “personal identity” and which don’t.
Also, take the more vile forms of Pascal’s mugging and acausal trades. If something threatens torture to a simulation of you, should you be concerned about actually experiencing the torture, thereby subverting your rationalist impulse to shut up and multiply utility?
Depending on specifics, yes. But I don’t see how this depends on the labeling question. This just boils down to “what do I expect to experience in the future?” which I don’t see as being related to “personal identity”.
This just boils down to “what do I expect to experience in the future?” which I don’t see as being related to “personal identity”.
Forget the phrase “personal identity”. If I am a powerful AI from the future and I come back to tell you that I will run a simulation of you so we can go bowling together, do you or do you not expect to experience bowling with me in the future, and why?
Suppose that my husband and I believe that while we’re sleeping, someone will paint a blue dot on either my forehead, or my husband’s, determined randomly. We expect to see a blue dot when we wake up… and we also expect not to see a blue dot when we wake up. This is a perfectly reasonable state for two people to be in, and not at all problematic.
Suppose I believe that while I’m sleeping, a powerful AI will duplicate me (if you like, in such a way that both duplicates experience computational continuity with the original) and paint a blue dot on one duplicate’s forehead. I expect to see a blue dot when I wake up… and I also expect not to see a blue dot when I wake up. This is a perfectly reasonable state for a duplicated person to be in, and not at all problematic.
Similarly, I both expect to experience bowling with you, and expect to not experience bowling with you (supposing that the original continues to operate while the simulation goes bowling).
The situation isn’t analogous, however. Let’s posit that you’re still alive when the simulation is run. In fact, aside from technology there’s no reason to put it in the future or involve an AI. I’m a brain-scanning researcher who shows up at your house tomorrow, with all the equipment to do a non-destructive mind upload and whole-brain simulation. I tell you that I am going to scan your brain, start the simulation, then don VR goggles and go virtual-bowling with “you”. Once the scanning is done, you and your husband are free to go to the beach or whatever, while I go bowling with TheVirtualDave.
What probability would you put on you ending up bowling instead of at the beach?
Well, let’s call P1 my probability of actually going to the beach, even if you never show up. That is, (1-P1) is the probability that traffic keeps me from getting there, or my car breaks down, or whatever. And let’s call P2 my probability of your VR/simulation rig working. That is, (1-P2) is the probability that the scanner fails, etc. etc.
In your scenario, I put a P1 probability of ending up at the beach, and a P2 probability of ending up bowling. If both are high, then I’m confident that I will do both.
There is no “instead of”. Going to the beach does not prevent me from bowling. Going bowling does not prevent me from going to the beach. Someone will go to the beach, and someone will go bowling, and both of those someones will be me.
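The bookkeeping above can be made explicit with a short sketch (the specific numbers are my assumptions, purely for illustration):

```python
# Hypothetical numbers for the two branches after the scan.
p1 = 0.95  # P1: probability the original actually makes it to the beach
p2 = 0.90  # P2: probability the scan/simulation rig works

# Each branch is assessed independently; there is no "instead of",
# so nothing forces the two probabilities to sum to 1.
p_beach = p1      # probability that a me ends up at the beach
p_bowling = p2    # probability that a me ends up bowling

total = p_beach + p_bowling
print(total)  # close to 2 when both branches are reliable
```

The point is just that once both branches count as “me”, the answers to “will I go to the beach?” and “will I go bowling?” need not be complementary.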
As I alluded to in another reply, assuming perfectly reliable scanning, and assuming that you hate losing in bowling to MarkAI, how do you decide whether to go practice bowling or to do something else you like more?
If it’s important to me not to lose in bowling, I practice bowling, since I expect to go bowling. (Assuming uninteresting scanning tech.) If it’s also important to me to show off my rocking abs at the beach, I do sit-ups, since I expect to go to the beach. If I don’t have the time to do both, I make a tradeoff, and I’m not sure exactly how I make that tradeoff, but it doesn’t include assuming that the going to the beach somehow happens more or happens less or anything like that than the going bowling.
Admittedly, this presumes that the bowling-me will go on to live a normal lifetime. If I know the simulation will be turned off right after the bowling match, I might not care so much about winning the bowling match. (Then again, I might care a lot more.) By the same token, if I know the original will be shot tomorrow morning I might not care so much about my abs. (Then again, I might care more. I’m really not confident about how the prospect of upcoming death affects my choices; still less how it does so when I expect to keep surviving as well.)
What is your probability that you will wake up tomorrow morning? What is your probability that you will wake up Friday morning? I expect to do both, so my probabilities of those two things add up to ~2.
In Mark’s scenario, I expect to go bowling and I expect to go to the beach. My probabilities of those two things similarly add up to ~2.
I think we have the same model of the situation, but I feel compelled to normalize my probability. A guess as to why:
I can rephrase Mark’s question as, “In 10 hours, will you remember having gone to the beach or having bowled?” (Assume the simulation will continue running!) There’ll be a you that went bowling and a you that went to the beach, but no single you that did both of those things. Your successive wakings example doesn’t have this property.
I suppose I answer 50% to indicate my uncertainty about which future self we’re talking about, since there are two possible referents. Maybe this is unhelpful.
That said, normalizing my probability as though there were only going to be one of me at the end of the process doesn’t seem at all compelling to me. I don’t have any uncertainty about which future self we’re talking about—we’re talking about both of them.
Suppose that you and your husband are planning to take the day off tomorrow, and he is planning to go bowling, and you are planning to go to the beach, and I ask the two of you “what’s y’all’s probability that one of y’all will go bowling, and what’s y’all’s probability that one of y’all will go to the beach?” It seems the correct answers to those questions will add up to more than 1, even though no one person will experience bowling AND going to the beach. In 10 hours, one of you will remember having gone to the beach, and one will remember having bowled.
This is utterly unproblematic when we’re talking about two people.
In the duplication case, we’re still talking about two people, it’s just that right now they are both me, so I get to answer for both of them. So, in 10 hours, I (aka “one of me”) will remember having gone to the beach. I will also remember having bowled. I will not remember having gone to the beach and having bowled. And my probabilities add up to more than 1.
I recognize that it doesn’t seem that way to you, but it really does seem like the obvious way to think about it to me.
I was asking a disguised question. I really wanted to know: “which of the two future selves do you identify with, and why?”
Oh, that’s easy. Both of them, equally. Assuming accurate enough simulations etc., of course.
ETA: Why? Well, they’ll both think that they’re me, and I can’t think of a way to disprove the claim of one without also disproving the claim of the other.
ETA: Why? Well, they’ll both think that they’re me, and I can’t think of a way to disprove the claim of one without also disproving the claim of the other.
Any of the models of consciousness-as-continuity would offer a definitive prediction.
Any of the models of consciousness-as-continuity would offer a definitive prediction.
IMO, there literally is no fact of the matter here, so I will bite the bullet and say that any model that supposes there is one is wrong. :) I’ll reconsider if you can point to an objective feature of reality that changes depending on the answer to this. (“So-and-so will think it immoral” doesn’t count!)
PS: The up/down karma vote isn’t a record of what you agree with, but of whether a post has been reasonably argued.
It is neither of those things. This isn’t debate club. We don’t have to give people credit for finding the most clever arguments for a wrong position.
I make no comment about what the subject of debate is in this context (I don’t know or care which party is saying crazy things about ‘consciousness’). I downvoted the parent specifically because it made a normative assertion about how people should use the karma mechanism which is neither something I support nor an accurate description of an accepted cultural norm. This is an example of voting being used legitimately in a way that is nothing to do with whether the post has been reasonably argued.
I did use the term “reasonably argued” but I didn’t mean clever. Maybe “rationally argued”? By my own algorithm a cleverly argued but clearly wrong argument would not garner an up vote.
I gave you an upvote for explaining your down vote.
I did use the term “reasonably argued” but I didn’t mean clever. Maybe “rationally argued”? By my own algorithm a cleverly argued but clearly wrong argument would not garner an up vote.
You are right, ‘clever’ contains connotations that you wouldn’t intend. I myself have used ‘clever’ as a term of disdain and I don’t want to apply that to what you are talking about. Let’s stick with either of the terms you used and agree that we are talking about arguments that are sound, cogent, and reasonable rather than artful rhetoric that exploits known biases in human social behaviour to score persuasion points. I maintain that even then down-votes are sometimes appropriate. Allow me to illustrate.
There are two outwardly indistinguishable boxes with buttons that display heads or tails when pressed. You know that one of the boxes returns heads 70% of the time, and the other returns heads 40% of the time. A third party, Joe, has experimented with the first box three times and tells you that each time it returned heads. This represents an argument that the first box is the “70%” box. Now, assume that I have observed the internals of the boxes and know that the first box is, in fact, the 40% box.
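To make explicit how strong Joe’s evidence is, here is a quick Bayes calculation (the 50/50 prior is my assumption, not stated above):

```python
# Two hypotheses about the first box, with an assumed 50/50 prior:
#   H70: the first box shows heads 70% of the time
#   H40: the first box shows heads 40% of the time
prior_h70 = prior_h40 = 0.5

# Joe observed three heads in three presses of the first box.
like_h70 = 0.7 ** 3  # 0.343
like_h40 = 0.4 ** 3  # 0.064

posterior_h70 = (like_h70 * prior_h70) / (
    like_h70 * prior_h70 + like_h40 * prior_h40
)
print(round(posterior_h70, 3))  # roughly 0.84: genuinely strong evidence,
                                # even though the first box is in fact the 40% box
```

So Joe is arguing in good faith from real evidence and still pushing the audience toward the wrong conclusion, which is the situation the paragraph below addresses.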
Whether I downvote Joe’s comment depends on many things. Obviously, tone matters a lot, as does my impression of whether Joe’s bias is based on disingenuousness or more innocent ignorance. But even in the case where Joe is arguing in good faith there are some cases where a policy attempting to improve the community will advocate downvoting the contribution. For example, if there is a significant selection bias in what kind of evidence people like Joe have exposed themselves to, then popular perception after such people share their opinions will tend to be even more biased than the individuals alone. In that case downvoting Joe’s comment improves the discussion. The ideal outcome would be for Joe to learn to stfu until he learns more.
More simply I observe that even the most ‘rational’ of arguments can be harmful if the selection process for the creation and repetition of those arguments is at all biased.
I won’t because that’s not what I’m arguing. My position is that subjective experience has moral consequences, and therefore matters.
OK, that’s fine, but I’m not convinced—I’m having trouble thinking of something that I consider to be a moral issue that doesn’t have a corresponding consequence in the territory.
PS: That downvote wasn’t me. I’m aware of how votes work around here. :)
Example: is it moral to power-cycle (hibernate, turn off, power on, restore) a computer running a self-aware AI? Will future machine intelligences view any less-than-necessary AGI experiments I run the same way we view Josef Mengele’s work in Auschwitz? Is it a possible failure mode that an unfriendly/not-provably-friendly AI that experiences routine power cycling might uncover this line of reasoning and decide it doesn’t want to “die” every night when the lights go off? What would it do then?
OK, in a hypothetical world where somehow pausing a conscious computation—maintaining all data such that it could be restarted losslessly—is murder, those are concerns. Agreed. I’m not arguing against that.
My position is that pausing a computation as above happens to not be murder/death, and that those who believe it is murder/death are mistaken. The example I’m looking for is something objective that would demonstrate this sort of pausing is murder/death. (In my view, the bad thing about death is its permanence, that’s most of why we care about murder and what makes it a moral issue.)
As Eliezer mentioned in his reply (in different words), if power cycling is death, what’s the shortest suspension time that isn’t? Currently most computers run synchronously off a common clock. The computation is completely suspended between clock cycles. Does this mean that an AI running on such a computer is murdered billions of times every second? If so, then morality leading to this absurd conclusion is not a useful one.
Edit: it’s actually worse than that: digital computation happens mostly within a short window after each clock transition. The rest of the time between transitions just ensures that the electrical signals relax to within their tolerance levels. Which means that the AI in question is likely “dead” 90% of the time.
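The irrelevance of the gap length to the computation itself can be illustrated with a minimal sketch (a hypothetical toy update rule, not any particular AI):

```python
import time

def run(steps, pause=0.0):
    """A toy deterministic computation, optionally pausing between steps.

    The pause is invisible to the computation itself: the final state
    depends only on the update rule and the number of steps.
    """
    state = 1
    for _ in range(steps):
        state = (state * 31 + 7) % 1000003  # arbitrary deterministic update
        time.sleep(pause)                   # "suspended" between clock ticks
    return state

# Identical result whether the gaps between steps are zero or not:
assert run(200) == run(200, pause=0.0005)
```

The same equality would hold for a pause of a nanosecond or a year; nothing inside the computation can register the difference.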
What Eliezer and you describe is more analogous to task switching on a timesharing system, and yes, my understanding of computational continuity theory is that such a machine would not be sent to oblivion 120 times a second. Rather, such a computer would be strangely schizophrenic, but also completely self-consistent at any moment in time.
But computational continuity does have a different answer in the case of intermediate non-computational states. For example, saving the state of a whole brain emulation to magnetic disk, shutting off the machine, and restarting it sometime later. In the meantime, shutting off the machine resulted in decoupling/decoherence of state between the computational elements of the machine, and general reversion back to a state of thermal noise. This does equal death-of-identity, and is similar to the transporter thought experiment. The relevance may be more obvious when you think about taking the drive out and loading it in another machine, copying the contents of the disk, or running multiple simulations from a single checkpoint (none of these change the facts, however).
In the meantime, shutting off the machine resulted in decoupling/decoherence of state between the computational elements of the machine, and general reversion back to a state of thermal noise.
It is probably best for you to stay away from the physics/QM point of view on this, since you will lose: the states “between the computational elements”, whatever you may mean by that, decohere and relax to “thermal noise” much quicker than the time between clock transitions, so there is no difference between a nanosecond and an hour.
Maybe what you mean is more logic-related? For example, when a self-aware algorithm (including a human) expects one second to pass and instead measures a full hour (because it was suspended), it interprets that discrepancy of inputs as death? If so, shouldn’t any unexpected discrepancy, like sleeping past your alarm clock, or day-dreaming in class, be treated the same way?
This does equal death-of-identity, and is similar to the transporter thought experiment.
I agree that forking a consciousness is not a morally trivial issue, but that’s different from temporary suspension and restarting, which happens all the time to people and machines. I don’t think that conflating the two is helpful.
It is probably best for you to stay away from the physics/QM point of view on this, since you will lose: the states “between the computational elements”, whatever you may mean by that, decohere and relax to “thermal noise” much quicker than the time between clock transitions, so there is no difference between a nanosecond and an hour.
Maybe what you mean is more logic-related?...
No, I meant the physical explanation (I am a physicist, btw). It is possible for a system to exhibit features at certain frequencies, whilst only showing noise at others. Think standing waves, for example.
I agree that forking a consciousness is not a morally trivial issue, but that’s different from temporary suspension and restarting, which happens all the time to people and machines. I don’t think that conflating the two is helpful.
When does it ever happen to people? When does your brain, or even just regions of it, ever stop functioning entirely? You do not remember deep sleep because you are not forming memories, not because your brain has stopped functioning. What else could you be talking about?
Hmm, I get a feeling that none of these are your true objections and that, for some reason, you want to equate suspension to death. I should have stayed disengaged from this conversation. I’ll try to do so now. Hope you get your doubts resolved to your satisfaction eventually.
I don’t want to, I just think that the alternatives lead to absurd outcomes that can’t possibly be correct (see my analysis of the teleporter scenario).
I really have a hard time imagining a universe where there exists a thing that is preserved when 10^-9 seconds pass between computational steps, but not when 10^3 seconds pass between steps (while I move the hard drive to another box).
Prediction: TheOtherDave will say 50%, Beach!Dave and Bowling!Dave would both consider both to be the “original”. Assuming sufficiently accurate scanning & simulating.
I’ll give a 50% chance that I’ll experience that. (One copy of me continues in the “real” world, another copy of me appears in a simulation and goes bowling.)
(If you ask this question as “the AI is going to run N copies of the bowling simulation”, then I’m not sure how to answer—I’m not sure how to weight N copies of the exact same experience. My intuition is that I should still give a 50% chance, unless the simulations are going to differ in some respect, in which case I’d give an N/(N+1) chance.)
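The bookkeeping behind those two answers can be made explicit with a tiny sketch (an illustration of the stated intuition, not an endorsed theory; `p_bowling` is a hypothetical helper):

```python
def p_bowling(n_sims, distinct):
    """Anticipation probability of finding oneself in a bowling simulation,
    given one original plus n_sims copies, weighting each qualitatively
    distinct running copy equally.
    """
    if not distinct:
        # N identical simulations collapse into a single experience,
        # so it's one outcome against the original: 1 of 2.
        return 1 / 2
    # Distinct simulations count separately: N of N+1 outcomes.
    return n_sims / (n_sims + 1)

assert p_bowling(100, distinct=False) == 0.5
assert p_bowling(100, distinct=True) == 100 / 101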
I need to think about your answer, as right now it doesn’t make any sense to me. I suspect that whatever intuition underlies it is the source of our disagreement/confusion.
@linkhyrule5 had an answer better than the one I had in mind. The probability of us going bowling together is approximately equal to the probability that you are already in said simulation, if computational continuity is what matters.
If there were a 6th Day-like service I could sign up for, where if anything were to happen to me a clone/simulation with my memories would be created, I’d sign up for it in a heartbeat. Because if something were to happen to me I wouldn’t want to deprive my wife of her husband, or my daughters of their father. But that is purely altruistic: I would assign ~0 probability to actually experiencing that resurrection. Rather, some doppelganger twin that in every outward way behaves like me will take up my life where I left off. And that’s fine, but let’s be clear about the difference.
If you are not the simulation the AI was referring to, then you and it will not go bowling together, period. Because when said bowling occurs, you’ll be dead. Or maybe you’ll be alive and well and off doing other things while the simulation is going on. But under no circumstances should you expect to wake up as the simulation, as we are assuming them to be causally separate.
At least from my way of thinking. I’m not sure I understand yet where you are coming from well enough to predict what you’d expect to experience.
@linkhyrule5 had an answer better than the one I had in mind. The probability of us going bowling together is approximately equal to the probability that you are already in said simulation, if computational continuity is what matters.
You could understand my 50% answer to be expressing my uncertainty as to whether I’m in the simulation or not. It’s the same thing.
I don’t understand what “computational continuity” means. Can you explain it using a program that computes the digits of pi as an example?
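For what it’s worth, here is one way to make the question concrete, using Gibbons’ unbounded spigot algorithm for pi (a standard algorithm; the checkpoint/resume framing is illustrative, not anyone’s proposed definition of continuity). The program’s entire state is six integers; it can be halted, serialized to disk, and resumed, and the resulting digit stream is indistinguishable from an uninterrupted run:

```python
import pickle

# Gibbons' unbounded spigot algorithm for pi: the whole state is six integers.
START = (1, 0, 1, 1, 3, 3)

def step(state):
    """Advance the spigot one step; return (digit_or_None, new_state)."""
    q, r, t, k, n, l = state
    if 4 * q + r - t < n * t:
        return n, (10 * q, 10 * (r - n * t), t, k,
                   (10 * (3 * q + r)) // t - 10 * n, l)
    return None, (q * k, (2 * q + r) * l, t * l, k + 1,
                  (q * (7 * k + 2) + r * l) // (t * l), l + 2)

def digits(state, count):
    """Run until `count` digits have been produced; return (digits, state)."""
    out = []
    while len(out) < count:
        d, state = step(state)
        if d is not None:
            out.append(d)
    return out, state

# Uninterrupted run: ten digits of pi.
uninterrupted, _ = digits(START, 10)

# Interrupted run: five digits, then "power off" (serialize the state to
# inert bytes), then restore and compute five more.
first, saved = digits(START, 5)
blob = pickle.dumps(saved)            # the program's entire state, on "disk"
rest, _ = digits(pickle.loads(blob), 5)

assert first + rest == uninterrupted  # the digit stream can't tell the difference
```

Nothing observable distinguishes the two runs; the disagreement in this thread is over whether that exhausts the question.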
Rather, some doppelganger twin that in every outward way behaves like me will take up my life where I left off. And that’s fine, but let’s be clear about the difference.
I think you’re making a distinction that exists only in the map, not in the territory. Can you point to something in the territory that this matters for?
I come back to tell you that I will run a simulation of you so we can go bowling together
Presumably you create a sim-me which includes the experience of having this conversation with you (the AI).
do you or do you not expect to experience bowling with me in the future, and why?
Let me interpret the term “expect” concretely as “I better go practice bowling now, so that sim-me can do well against you later” (assuming I hate losing). If I don’t particularly enjoy bowling and rather do something else, how much effort is warranted vs doing something I like?
The answer is not unambiguous and depends on how much I (meat-me) care about future sim-me having fun and not embarrassing sim-self. If sim-me continues on after meat-me passes away, I care very much about sim-me’s well being. On the other hand, if the sim-me program is halted after the bowling game, then I (meat-me) don’t care much about that sim-loser. After all, meat-me (who will not go bowling) will continue to exist, at least for a while. You might feel differently about sim-you, of course. There is a whole range of possible scenarios here. Feel free to specify one in more detail.
TL;DR: If the simulation will be the only copy of “me” in existence, I act as if I expect to experience bowling.
I’d have to learn more about how general anesthesia works to decide if personal identity would be preserved on the operating table
Hmm, what about across dreamless sleep? Or fainting? Or falling and hitting your head and losing consciousness for an instant? Would these count as killing one person and creating another? And so be morally net-negative?
If computational continuity is what matters, then no. Just because you have no memory doesn’t mean you didn’t experience it. There is in fact a continuous experience throughout all of the examples you gave, just no new memories being formed. But from the last point you remember (going to sleep, fainting, hitting your head) to when you wake up, you did exist and were running a computational process. From our understanding of neurology you can be certain that there was no interruption of subjective experience of identity, even if you can’t remember what actually happened.
Whether this is also true of general anesthesia depends very much on the biochemistry going on. I admit ignorance here.
OK, I guess I should give up, too. I am utterly unable to relate to whatever it is you mean by “because you have no memory doesn’t mean you didn’t experience it” or “subjective experience of identity, even if you can’t remember what actually happened”.
Did I somehow split into 100 different persons? Or were there in fact 101 separate subjective identities, 1 of which terminated in 2013 and 100 new ones created for the simulations? The latter is a more straightforward explanation, IMHO.
I would say that yes, at T1 there’s one of me, and at T2 there’s 100 of me. I don’t see what makes “there’s 101 of me, one of which terminated at T1” more straightforward than that.
I don’t see what makes “there’s 101 of me, one of which terminated at T1” more straightforward than that.
It’s wrapped up in the question of what happened to that original copy that (maybe?) terminated at T1. Did that original version of you terminate completely and forever? Then I wouldn’t count it among the 100 copies that were created later.
Sure, obviously if it terminated then it isn’t around afterwards. Equally obviously, if it’s around afterwards, it didn’t terminate.
You said your metric for determining which description is accurate was (among other things) simplicity, and you claimed that the “101-1” answer is more straightforward (simpler?) than the “100” answer. You can’t now turn around and say that the reason it’s simpler is because the “101-1” answer is accurate.
Either it’s accurate because it’s simpler, or it’s simpler because it’s accurate, but to assert both at once is illegitimate.
I’ll address this in my sequence, which hopefully I will have time to write. The short answer is that what matters isn’t which explanation of this situation is simpler, requires fewer words, a smaller number, or whatever. What matters is: which general rule is simpler?
Pattern or causal continuity leads to all sorts of weird edge cases, some of which I’ve tried to explain in my examples here, and in other cases fails (mysterious answer) to provide a definitive prediction of subjective experience. There may be other solutions, but computational continuity at the very least provides a simpler model, even if it results in the more “complex” 101-1 answer.
It’s sorta like wave collapse vs many-worlds. Wave collapse is simpler (single world), right? No. Many worlds is the simpler theory because it requires fewer rules, even though it results in a mind-bogglingly more complex and varied multiverse. In this case I think computational continuity in the way I formulated it reduces consciousness down to simple general explanation that dissolves the question with no residual problems.
Kinda like how free will is what a decision algorithm feels like from the inside, consciousness / subjective experience is what any computational process feels like from the inside. And therefore, when the computational process terminates, so too does the subjective experience.
OK, cool, but now I’m confused. If we’re meaning the same thing, I don’t understand how it can be a question—“not running” isn’t a thing an algorithm can experience; it’s a logical impossibility.
Clearly, your subjective experience today is HONK-screech-bam-oblivion, since all the subjective experiences that come after that don’t happen today in this example… they happen 50 years later.
It is not in the least bit clear to me that this means those subjective experiences aren’t your subjective experiences. You aren’t some epiphenomenal entity that dissipates in the course of those 50 years and therefore isn’t around to experience those experiences when they happen… whatever is having those subjective experiences, whenever it is having them, that’s you.
maybe we should taboo the phrase “personal/subjective identity”.
Sounds like a fine plan, albeit a difficult one. Want to take a shot at it?
EDIT: Ah, you did so elsethread. Cool. Replied there.
Because the notion of “me” is not an ontologically basic category and the question of whether the “real me” wakes up is a question that ought to be un-asked.
I’m a bit confused at the question...you articulated my intent with that sentence perfectly in your other post.
and, as TheOtherDave said,
EDIT: Nevermind, I now understand which part of my statement you misunderstood.
I’m not accepting-but-not-elevating the idea that the “real me” doesn’t wake up on the other side. Rather, I’m saying that the questions of personal identity over time do not make sense in the first place. It’s like asking “which color is the most moist?”
The root of your philosophical dilemma is that “personal identity” is a conceptual substitution for soul—a subjective thread that connects you over space and time.
No such thing exists. There is no specific location in your brain which is you. There is no specific time point which is you. Subjective experience exists only in the fleeting present. The only “thread” connecting you to your past experiences is your current subjective experience of remembering them. That’s all.
I always wonder how I should treat my future self if I reject the continuity of self. Should I think of him like a son? A spouse? A stranger? Should I let him get fat? Not get him a degree? Invest in stock for him? Give him another child?
I think it matters in so far as assisting your present trajectory. Otherwise it might as well be an unfeeling entity.
I have a strong subjective experience of moment-to-moment continuity, even if only in the fleeting present. Simply saying “no such thing exists” doesn’t do anything to resolve the underlying confusion. If no such thing as personal identity exists, then why do I experience it? What is the underlying insight that eliminates the question?
This is not an abstract question either. It has huge implications for the construction of timeless decision theory and utilitarian metamorality.
“a strong subjective experience of moment-to-moment continuity” is an artifact of the algorithm your brain implements. It certainly exists in as much as the algorithm itself exists. So does your personal identity. If in the future it becomes possible to run the same algorithm on a different hardware, it will still produce this sense of personal identity and will feel like “you” from the inside.
Yes, I’m not questioning whether a future simulation / emulation of me would have an identical subjective experience. To reject that would be a retreat to epiphenomenalism.
Let me rephrase the question, so as to expose the problem: if I were to use advanced technology to have my brain scanned today, then got hit by a bus and cremated, and then 50 years from now that brain scan is used to emulate me, what would my subjective experience be today? Do I experience “HONK Screeeech, bam” then wake up in a computer, or is it “HONK Screeeech, bam” and oblivion?
Yes, I realize that both cases result in a computer simulation of Mark in 2063 claiming to have just woken up in the brain scanner, with a subjective feeling of continuity. But is that belief true? The two situations have very different outcomes for the Mark of 2013. If you can’t see that, then I think we are talking about different things, and maybe we should taboo the phrase “personal/subjective identity”.
Ah, hopefully I’m slowly getting what you mean. So, there was the original you, Mark 2013, whose algorithm was terminated soon after it processed the inputs “HONK Screeeech, bam”, and the new you, Mark 2063, whose experience is “HONK Screeeech, bam” then “wake up in a computer”. You are concerned with… I’m having trouble articulating what exactly… something about the lack of experiences of Mark 2013? But, say, if Mark 2013 were restored to life in mostly the same physical body after a 50-year “oblivion”, you wouldn’t be?
Pretty much correct. To be specific, if computational continuity is what matters, then Mark!2063 has my memories, but was in fact “born” the moment the simulation started, 50 years in the future. That’s when his identity began, whereas mine ended when I died in 2013.
This seems a little more intuitive when you consider switching on 100 different emulations of me at the same time. Did I somehow split into 100 different persons? Or were there in fact 101 separate subjective identities, 1 of which terminated in 2013 and 100 new ones created for the simulations? The latter is a more straightforward explanation, IMHO.
No, that would make little difference as it’s pretty clear that physical continuity is an illusion. If pattern or causal continuity were correct, then it’d be fine, but both theories introduce other problems. If computational continuity is correct, then a reconstructed brain wouldn’t be me any more than a simulation would. However it’s possible that my cryogenically vitrified brain would preserve identity, if it were slowly brought back online without interruption.
I’d have to learn more about how general anesthesia works to decide if personal identity would be preserved on the operating table (until then, it scares the crap out of me). Likewise, an AI or emulation running on a computer that is powered off and then later resumed would also break identity, but depending on the underlying nature of computation & subjective experience, task switching and online suspend/resume may or may not result in cycling identity.
I’ll stop there because I’m trying to formulate all these thoughts into a longer post, or maybe a sequence of posts.
Can you taboo “personal identity”? I don’t understand what important thing you could lose by going under general anesthesia.
It’s easier to explain in the case of multiple copies of yourself. Imagine the transporter were turned into a replicator—it gets stuck in a loop reconstructing the last thing that went through it, namely you. You step off and turn around to find another version of you just coming out. And then another, and another, etc. Each one of you shares the same memories, but from that moment on you have diverged. Each clone continues life with their own subjective experience until that experience is terminated by that clone’s death.
That sense of subjective experience separate from memories or shared history is what I have been calling “personal identity.” It is what gives me the belief, real or illusory, that I am the same person from moment to moment, day to day, and what separates me from my clones. You are welcome to suggest a better term.
The replicator / clone thought experiment shows that “subjective experience of identity” is something different from the information pattern that represents your mind. There is something, although at this moment that something is not well defined, which makes you the same “you” that will exist five minutes in the future, but which separates you from the “you”s that walked out of the replicator, or exist in simulation, for example.
The first step is recognizing this distinction. Then turn around and apply it to less fantastical situations. If the clone is “you” but not you (meaning no shared identity, and my apologies for the weak terminology), then what’s to say that a future simulation of “you” would also be you? What about cryonics, will your unfrozen brain still be you? That might depend on what they do to repair damage from vitrification. What about general anesthesia? Again, I need to learn more about how general anesthesia works, but if they shut down your processing centers and then restart you later, how is that different from the teleportation or simulation scenario? After all we’ve already established that whatever provides personal identity, it’s not physical continuity.
Well, OK. So suppose that, after I go through that transporter/replicator, you ask the entity that comes out whether it has the belief, real or illusory, that it is the same person in this moment that it was at the moment it walked into the machine, and it says “yes”.
If personal identity is what creates that belief, and that entity has that belief, it follows that that entity shares my personal identity… doesn’t it?
Not quite. If You!Mars gave it thought before answering, his thinking probably went like this: “I have memories of going into the transporter, just a moment ago. I have a continuous sequence of memories, from then until now. Nowhere in those memories does my sense of self change. Right now I am experiencing the same sense of self I always remember experiencing, and laying down new memories. Ergo, by backwards induction, I am the same person that walked into the teleporter.” However, for that—or any—line of meta-reasoning to hold, (1) your memories need to accurately correspond with the true and full history of reality, and (2) you need to trust that what occurs in the present also occurred in the past. In other words, it’s kinda like saying “my memory wasn’t altered because I would have remembered that.” It’s not a circular argument per se, but it is a meta loop.
The map is not the territory. What happened to You!Earth’s subjective experience is an objective, if perhaps not empirically observable fact. You!Mars’ belief about what happened may or may not correspond with reality.
What if me!Mars, after giving it thought, shakes his head and says “no, that’s not right. I say I’m the same person because I still have a sense of subjective experience, which is separate from memories or shared history, which gives me the belief, real or illusory, that I am the same person from moment to moment, day to day, and which separates me from my clones”?
Do you take his word for it?
Do you assume he’s mistaken?
Do you assume he’s lying?
Assuming that he acknowledges that clones have a separate identity, or in other words he admits that there can be instances of himself that are not him, then by asserting the same identity as the person that walked into the teleporter, he is making an extrapolation into the past. He is expressing a belief that, by whatever definition he is using, the person walking into the teleporter meets a standard of me-ness that the clones do not. Unless the definition under consideration explicitly references You!Mars’ mental state (e.g. “by definition” he has shared identity with people he remembers having shared identity with), the validity of that belief is external: it is either true or false. The map is not the territory.
Under an assumption of pattern or causal continuity, for example, it would be explicitly true. For computational continuity it would be false.
If I understood you correctly, then on your account, his claim is simply false, but he isn’t necessarily lying.
Yes?
It seems to follow that he might actually have a sense of subjective experience, which is separate from memories or shared history, which gives him the belief, real or illusory (in this case illusory), that he is the same person from moment to moment, day to day, and the same person who walked into the teleporter, and which separates him from his clones.
Yes?
Yes, in the sense that it is a belief about his own history which is either true or false, like any historical fact. Whether it is actually false depends on the nature of “personal identity”. If I understand the original post correctly, I think Eliezer would argue that his claim is true. I think Eliezer’s argument lacks sufficient justification, and there’s a good chance his claim is false.
Yes. My question is: is that belief justified?
If your memory were altered such to make you think you won the lottery, that doesn’t make you any richer. Likewise You!Mars’ memory was constructed by the transporter machine in such a way, following the transmitted design as to make him remember stepping into the transporter on Earth as you did, and walking out of it on Mars in seamless continuity. But just because he doesn’t remember the deconstruction, information transmission, and reconstruction steps doesn’t mean they didn’t happen. Once he learns what actually happened during his transport, his decision about whether he remains the same person that entered the machine on Earth depends greatly on his model of consciousness and personal identity/continuity.
OK, understood.
Here’s my confusion: a while back, you said:
And yet, here’s Dave!Mars, who has a sense of subjective experience separate from memories or shared history which gives him the belief, real or illusory (in this case illusory), that he is the same person from moment to moment, day to day, and the same person who walked into the teleporter, and which separates him from his clones.
But on your account, he might not have Dave’s personal identity.
So, where is this sense of subjective experience coming from, on your account? Is it causally connected to personal identity, or not?
Yes, that’s certainly true. By the same token, if I convince you that I placed you in stasis last night for… um… long enough to disrupt your personal identity (a minute? an hour? a millisecond? a nanosecond? how long a period of “computational discontinuity” does it take for personal identity to evaporate on your account, anyway?), you would presumably conclude that you aren’t the same person who went to bed last night. OTOH, if I placed you in stasis last night and didn’t tell you, you’d conclude that you’re the same person, and live out the rest of your life none the wiser.
That experiment shows that “personal identity”, whatever that means, follows a time-tree, not a time-line. That conclusion also must hold if MWI is true.
So I get that there’s a tricky (?) labeling problem here, where it’s somewhat controversial which copy of you should be labeled as having your “personal identity”. The thing that isn’t clear to me is why the labeling problem is important. What observable feature of reality depends on the outcome of this labeling problem? We all agree on how those copies of you will act and what beliefs they’ll have. What else is there to know here?
Would you step through the transporter? If you answered no, would it be moral to force you through the transporter? What if I didn’t know your wishes, but had to extrapolate? Under what conditions would it be okay?
Also, take the more vile forms of Pascal’s mugging and acausal trades. If something threatens torture to a simulation of you, should you be concerned about actually experiencing the torture, thereby subverting your rationalist impulse to shut up and multiply utility?
I don’t see how any of that depends on the question of which computations (copies of me) get labeled with “personal identity” and which don’t.
Depending on specifics, yes. But I don’t see how this depends on the labeling question. This just boils down to “what do I expect to experience in the future?” which I don’t see as being related to “personal identity”.
Forget the phrase “personal identity”. If I am a powerful AI from the future and I come back to tell you that I will run a simulation of you so we can go bowling together, do you or do you not expect to experience bowling with me in the future, and why?
Yes, with probability P(simulation), or no, with probability P(not simulation), depending.
Suppose that my husband and I believe that while we’re sleeping, someone will paint a blue dot on either my forehead, or my husband’s, determined randomly. We expect to see a blue dot when we wake up… and we also expect not to see a blue dot when we wake up. This is a perfectly reasonable state for two people to be in, and not at all problematic.
Suppose I believe that while I’m sleeping, a powerful AI will duplicate me (if you like, in such a way that both duplicates experience computational continuity with the original) and paint a blue dot on one duplicate’s forehead. I expect to see a blue dot when I wake up… and I also expect not to see a blue dot when I wake up. This is a perfectly reasonable state for a duplicated person to be in, and not at all problematic.
Similarly, I both expect to experience bowling with you, and expect to not experience bowling with you (supposing that the original continues to operate while the simulation goes bowling).
The situation isn’t analogous, however. Let’s posit that you’re still alive when the simulation is run. In fact, aside from technology there’s no reason to put it in the future or involve an AI. I’m a brain-scanning researcher who shows up at your house tomorrow, with all the equipment to do a non-destructive mind upload and whole-brain simulation. I tell you that I am going to scan your brain, start the simulation, then don VR goggles and go virtual-bowling with “you”. Once the scanning is done you and your husband are free to go to the beach or whatever, while I go bowling with TheVirtualDave.
What probability would you put on you ending up bowling instead of at the beach?
Well, let’s call P1 my probability of actually going to the beach, even if you never show up. That is, (1-P1) is the probability that traffic keeps me from getting there, or my car breaks down, or whatever. And let’s call P2 my probability of your VR/simulation rig working. That is, (1-P2) is the probability that the scanner fails, etc. etc.
In your scenario, I put a P1 probability of ending up at the beach, and a P2 probability of ending up bowling. If both are high, then I’m confident that I will do both.
There is no “instead of”. Going to the beach does not prevent me from bowling. Going bowling does not prevent me from going to the beach. Someone will go to the beach, and someone will go bowling, and both of those someones will be me.
As I alluded to in another reply, assuming perfectly reliable scanning, and assuming that you hate losing in bowling to MarkAI, how do you decide whether to go practice bowling or to do something else you like more?
If it’s important to me not to lose in bowling, I practice bowling, since I expect to go bowling. (Assuming uninteresting scanning tech.)
If it’s also important to me to show off my rocking abs at the beach, I do sit-ups, since I expect to go to the beach.
If I don’t have the time to do both, I make a tradeoff, and I’m not sure exactly how I make that tradeoff, but it doesn’t include assuming that the going to the beach somehow happens more or happens less or anything like that than the going bowling.
Admittedly, this presumes that the bowling-me will go on to live a normal lifetime. If I know the simulation will be turned off right after the bowling match, I might not care so much about winning the bowling match. (Then again, I might care a lot more.) By the same token, if I know the original will be shot tomorrow morning I might not care so much about my abs. (Then again, I might care more. I’m really not confident about how the prospect of upcoming death affects my choices; still less how it does so when I expect to keep surviving as well.)
Your probabilities add up to more than 1...
Of course they do. Why shouldn’t they?
What is your probability that you will wake up tomorrow morning?
What is your probability that you will wake up Friday morning?
I expect to do both, so my probabilities of those two things add up to ~2.
In Mark’s scenario, I expect to go bowling and I expect to go to the beach.
My probabilities of those two things similarly add up to ~2.
I think we have the same model of the situation, but I feel compelled to normalize my probability. A guess as to why:
I can rephrase Mark’s question as, “In 10 hours, will you remember having gone to the beach or having bowled?” (Assume the simulation will continue running!) There’ll be a you that went bowling and a you that went to the beach, but no single you that did both of those things. Your successive wakings example doesn’t have this property.
I suppose I answer 50% to indicate my uncertainty about which future self we’re talking about, since there are two possible referents. Maybe this is unhelpful.
Yes, that seems to be what’s going on.
That said, normalizing my probability as though there were only going to be one of me at the end of the process doesn’t seem at all compelling to me. I don’t have any uncertainty about which future self we’re talking about—we’re talking about both of them.
Suppose that you and your husband are planning to take the day off tomorrow, and he is planning to go bowling, and you are planning to go to the beach, and I ask the two of you “what’s y’all’s probability that one of y’all will go bowling, and what’s y’all’s probability that one of y’all will go to the beach?” It seems the correct answers to those questions will add up to more than 1, even though no one person will experience bowling AND going to the beach. In 10 hours, one of you will remember having gone to the beach, and one will remember having bowled.
This is utterly unproblematic when we’re talking about two people.
In the duplication case, we’re still talking about two people, it’s just that right now they are both me, so I get to answer for both of them. So, in 10 hours, I (aka “one of me”) will remember having gone to the beach. I will also remember having bowled. I will not remember having gone to the beach and having bowled. And my probabilities add up to more than 1.
I recognize that it doesn’t seem that way to you, but it really does seem like the obvious way to think about it to me.
I think your description is coherent and describes the same model of reality I have. :)
Yes. Probabilities aside, this is what I was asking.
I was asking a disguised question. I really wanted to know: “which of the two future selves do you identify with, and why?”
Oh, that’s easy. Both of them, equally. Assuming accurate enough simulations etc., of course.
ETA: Why? Well, they’ll both think that they’re me, and I can’t think of a way to disprove the claim of one without also disproving the claim of the other.
Any of the models of consciousness-as-continuity would offer a definitive prediction.
IMO, there literally is no fact of the matter here, so I will bite the bullet and say that any model that supposes there is one is wrong. :) I’ll reconsider if you can point to an objective feature of reality that changes depending on the answer to this. (So-and-so will think it to be immoral doesn’t count!)
I won’t because that’s not what I’m arguing. My position is that subjective experience has moral consequences, and therefore matters.
PS: The up/down karma vote isn’t a record of what you agree with, but of whether a post has been reasonably argued.
For many people, the up/down karma vote is a record of what we want more/less of.
It is neither of those things. This isn’t debate club. We don’t have to give people credit for finding the most clever arguments for a wrong position.
I make no comment on what the subject of debate is in this context (I don’t know or care which party is saying crazy things about ‘consciousness’). I downvoted the parent specifically because it made a normative assertion about how people should use the karma mechanism which is neither something I support nor an accurate description of an accepted cultural norm. This is an example of voting being used legitimately in a way that has nothing to do with whether the post has been reasonably argued.
I did use the term “reasonably argued” but I didn’t mean clever. Maybe “rationally argued”? By my own algorithm a cleverly argued but clearly wrong argument would not garner an up vote.
I gave you an upvote for explaining your down vote.
You are right, ‘clever’ contains connotations that you wouldn’t intend. I myself have used ‘clever’ as a term of disdain and I don’t want to apply that to what you are talking about. Let’s stick with the terms you used and agree that we are talking about arguments that are sound, cogent and reasonable rather than artful rhetoric that exploits known biases in human social behaviour to score persuasion points. I maintain that even then down-votes are sometimes appropriate. Allow me to illustrate.
There are two outwardly indistinguishable boxes with buttons that display heads or tails when pressed. You know that one of the boxes returns heads 70% of the time and the other returns heads 40% of the time. A third party, Joe, has experimented with the first box three times and tells you that each time it returned heads. This represents an argument that the first box is the “70%” box. Now, assume that I have observed the internals of the boxes and know that the first box is, in fact, the 40% box.
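Joe's three heads really are evidence for the wrong box, which can be made concrete with Bayes' theorem. A sketch; the 50/50 prior over the two boxes is my own assumption, not stated in the example:

```python
# Posterior that Joe's box is the 70%-heads box after seeing heads three
# times, starting from an (assumed) 50/50 prior over the two boxes.
prior = 0.5
lik_70 = 0.7 ** 3    # P(three heads | it's the 70% box)
lik_40 = 0.4 ** 3    # P(three heads | it's the 40% box)
posterior_70 = prior * lik_70 / (prior * lik_70 + prior * lik_40)
# posterior_70 is about 0.84: a soundly reasoned argument for a
# conclusion that, by stipulation, happens to be false.
```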
Whether I downvote Joe’s comment depends on many things. Obviously, tone matters a lot, as does my impression of whether Joe’s bias stems from disingenuousness or more innocent ignorance. But even in the case where Joe is arguing in good faith, there are some cases where a policy aimed at improving the community will advocate downvoting the contribution. For example, if there is a significant selection bias in what kind of evidence people like Joe have exposed themselves to, then popular perception after such people share their opinions will tend to be even more biased than the individuals alone. In that case downvoting Joe’s comment improves the discussion. The ideal outcome would be for Joe to learn to stfu until he learns more.
More simply I observe that even the most ‘rational’ of arguments can be harmful if the selection process for the creation and repetition of those arguments is at all biased.
OK, that’s fine, but I’m not convinced—I’m having trouble thinking of something that I consider to be a moral issue that doesn’t have a corresponding consequence in the territory.
PS: That downvote wasn’t me. I’m aware of how votes work around here. :)
Example: is it moral to power-cycle (hibernate, turn off, power on, restore) a computer running a self-aware AI? Will future machine intelligences view any less-than-necessary AGI experiments I run the same way we do Josef Mengele’s work in Auschwitz? Is it a possible failure mode that an unfriendly/not-provably-friendly AI that experiences routine power cycling might uncover this line of reasoning and decide it doesn’t want to “die” every night when the lights go off? What would it do then?
OK, in a hypothetical world where somehow pausing a conscious computation—maintaining all data such that it could be restarted losslessly—is murder, those are concerns. Agreed. I’m not arguing against that.
My position is that pausing a computation as above happens to not be murder/death, and that those who believe it is murder/death are mistaken. The example I’m looking for is something objective that would demonstrate this sort of pausing is murder/death. (In my view, the bad thing about death is its permanence, that’s most of why we care about murder and what makes it a moral issue.)
As Eliezer mentioned in his reply (in different words), if power cycling is death, what’s the shortest suspension time that isn’t? Currently most computers run synchronously off a common clock. The computation is completely suspended between clock cycles. Does this mean that an AI running on such a computer is murdered billions of times every second? If so, then a morality leading to this absurd conclusion is not a useful one.
Edit: it’s actually worse than that: digital computation happens mostly within a short time of each clock transition. The rest of the time between transitions is just to ensure that the electrical signals relax to within their tolerance levels. Which means that the AI in question is likely dead 90% of the time.
What Eliezer and you describe is more analogous to task switching on a timesharing system, and yes, my understanding of computational continuity theory is that such a machine would not be sent to oblivion 120 times a second. Rather, such a computer would be strangely schizophrenic, but also completely self-consistent at any moment in time.
But computational continuity does have a different answer in the case of intermediate non-computational states. For example, saving the state of a whole brain emulation to magnetic disk, shutting off the machine, and restarting it sometime later. In the meantime, shutting off the machine resulted in decoupling/decoherence of state between the computational elements of the machine, and general reversion back to a state of thermal noise. This does equal death-of-identity, and is similar to the transporter thought experiment. The relevance may be more obvious when you think about taking the drive out and loading it in another machine, copying the contents of the disk, or running multiple simulations from a single checkpoint (none of these change the facts, however).
It is probably best for you to stay away from the physics/QM point of view on this, since you will lose: the states “between the computational elements”, whatever you may mean by that, decohere and relax to “thermal noise” much quicker than the time between clock transitions, so there is no difference between a nanosecond and an hour.
Maybe what you mean is more logic-related? For example, when a self-aware algorithm (including a human) expects one second to pass and instead measures a full hour (because it was suspended), it interprets that discrepancy of inputs as death? If so, shouldn’t any unexpected discrepancy, like sleeping past your alarm clock, or day-dreaming in class, be treated the same way?
I agree that forking a consciousness is not a morally trivial issue, but that’s different from temporary suspension and restarting, which happens all the time to people and machines. I don’t think that conflating the two is helpful.
No, I meant the physical explanation (I am a physicist, btw). It is possible for a system to exhibit features at certain frequencies, whilst only showing noise at others. Think standing waves, for example.
When does it ever happen to people? When does your brain, or even just regions of it, ever stop functioning entirely? You do not remember deep sleep because you are not forming memories, not because your brain has stopped functioning. What else could you be talking about?
Hmm, I get a feeling that none of these are your true objections and that, for some reason, you want to equate suspension to death. I should have stayed disengaged from this conversation. I’ll try to do so now. Hope you get your doubts resolved to your satisfaction eventually.
I don’t want to, I just think that the alternatives lead to absurd outcomes that can’t possibly be correct (see my analysis of the teleporter scenario).
I really have a hard time imagining a universe where there exists a thing that is preserved when 10^-9 seconds pass between computational steps but not when 10^3 seconds pass between steps (while I move the hard drive to another box).
Prediction: TheOtherDave will say 50%, Beach!Dave and Bowling!Dave would both consider both to be the “original”. Assuming sufficiently accurate scanning & simulating.
Here’s what TheOtherDave actually said.
Yes, looks like that prediction is falsified. At least the first sentence. :)
I’ll give a 50% chance that I’ll experience that. (One copy of me continues in the “real” world, another copy of me appears in a simulation and goes bowling.)
(If you ask this question as “the AI is going to run N copies of the bowling simulation”, then I’m not sure how to answer—I’m not sure how to weight N copies of the exact same experience. My intuition is that I should still give a 50% chance, unless the simulations are going to differ in some respect, then I’d give a N/(N+1) chance.)
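The observer-counting intuition in that parenthetical can be written out explicitly. This is an illustrative sketch of one possible weighting rule, not a settled answer:

```python
# One "real" copy plus n distinct simulated copies, with an indifference
# assumption over all n + 1 observer-moments.
def p_in_simulation(n_sims: int) -> float:
    return n_sims / (n_sims + 1)

p_in_simulation(1)    # 0.5, the 50% answer given above
p_in_simulation(99)   # 0.99, approaching certainty as distinct copies multiply
```

Whether N identical (rather than differing) simulations should count once or N times is precisely the point of uncertainty flagged above.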
I need to think about your answer, as right now it doesn’t make any sense to me. I suspect that whatever intuition underlies it is the source of our disagreement/confusion.
@linkhyrule5 had an answer better than the one I had in mind. The probability of us going bowling together is approximately equal to the probability that you are already in said simulation, if computational continuity is what matters.
If there were a 6th Day-like service I could sign up for where, if anything were to happen to me, a clone/simulation with my memories would be created, I’d sign up for it in a heartbeat. Because if something were to happen to me I wouldn’t want to deprive my wife of her husband, or my daughters of their father. But that is purely altruistic: I would have P(~0) expectation that I would actually experience that resurrection. Rather, some doppelganger twin that in every outward way behaves like me will take up my life where I left off. And that’s fine, but let’s be clear about the difference.
If you are not the simulation the AI was referring to, then you and it will not go bowling together, period. Because when said bowling occurs, you’ll be dead. Or maybe you’ll be alive and well and off doing other things while the simulation is going on. But under no circumstances should you expect to wake up as the simulation, as we are assuming them to be causally separate.
At least from my way of thinking. I’m not sure I understand yet where you are coming from well enough to predict what you’d expect to experience.
You could understand my 50% answer to be expressing my uncertainty as to whether I’m in the simulation or not. It’s the same thing.
I don’t understand what “computational continuity” means. Can you explain it using a program that computes the digits of pi as an example?
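One way to make that request concrete is a pi-digit computation that can be checkpointed, halted, and resumed. This is an illustrative sketch using Gibbons' unbounded spigot algorithm; the class name and checkpoint format are my own invention. The entire state of the computation is four integers, and a run resumed from a checkpoint emits exactly the digits the halted run would have:

```python
class PiDigits:
    """Streams decimal digits of pi; the whole computation is four integers."""
    def __init__(self, state=(1, 180, 60, 2)):
        self.q, self.r, self.t, self.j = state

    def checkpoint(self):
        # Everything the computation "is" at this instant.
        return (self.q, self.r, self.t, self.j)

    def next_digit(self):
        q, r, t, j = self.q, self.r, self.t, self.j
        u = 3 * (3 * j + 1) * (3 * j + 2)
        y = (q * (27 * j - 12) + 5 * r) // (5 * t)
        self.q = 10 * q * j * (2 * j - 1)
        self.r = 10 * u * (q * (5 * j - 2) + r - y * t)
        self.t = t * u
        self.j = j + 1
        return y

pi = PiDigits()
first = [pi.next_digit() for _ in range(5)]   # [3, 1, 4, 1, 5]
saved = pi.checkpoint()
del pi                                        # "power off": the running process is gone
resumed = PiDigits(saved)                     # arbitrarily later, on any machine
rest = [resumed.next_digit() for _ in range(5)]
```

Nothing in the saved state distinguishes a nanosecond pause from a decade, or the original machine from a copy of the disk; whether the resumed run is "the same computation" is exactly what the disputed notion of computational continuity has to answer.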
I think you’re making a distinction that exists only in the map, not in the territory. Can you point to something in the territory that this matters for?
Presumably you create a sim-me which includes the experience of having this conversation with you (the AI).
Let me interpret the term “expect” concretely as “I better go practice bowling now, so that sim-me can do well against you later” (assuming I hate losing). If I don’t particularly enjoy bowling and would rather do something else, how much effort is warranted versus doing something I like?
The answer is not unambiguous and depends on how much I (meat-me) care about future sim-me having fun and not embarrassing sim-self. If sim-me continues on after meat-me passes away, I care very much about sim-me’s well being. On the other hand, if the sim-me program is halted after the bowling game, then I (meat-me) don’t care much about that sim-loser. After all, meat-me (who will not go bowling) will continue to exist, at least for a while. You might feel differently about sim-you, of course. There is a whole range of possible scenarios here. Feel free to specify one in more detail.
TL;DR: If the simulation will be the only copy of “me” in existence, I act as if I expect to experience bowling.
Hmm, what about across dreamless sleep? Or fainting? Or falling and hitting your head and losing consciousness for an instant? Would these count as killing one person and creating another? And so be morally net-negative?
If computational continuity is what matters, then no. Just because you have no memory doesn’t mean you didn’t experience it. There is in fact a continuous experience throughout all of the examples you gave, just no new memories being formed. But from the last point you remember (going to sleep, fainting, hitting your head) to when you wake up, you did exist and were running a computational process. From our understanding of neurology you can be certain that there was no interruption of subjective experience of identity, even if you can’t remember what actually happened.
Whether this is also true of general anesthesia depends very much on the biochemistry going on. I admit ignorance here.
OK, I guess I should give up, too. I am utterly unable to relate to whatever it is you mean by “because you have no memory doesn’t mean you didn’t experience it” or “subjective experience of identity, even if you can’t remember what actually happened”.
I would say that yes, at T1 there’s one of me, and at T2 there’s 100 of me.
I don’t see what makes “there’s 101 of me, one of which terminated at T1” more straightforward than that.
It’s wrapped up in the question over what happened to that original copy that (maybe?) terminated at T1. Did that original version of you terminate completely and forever? Then I wouldn’t count it among the 100 copies that were created later.
Sure, obviously if it terminated then it isn’t around afterwards.
Equally obviously, if it’s around afterwards, it didn’t terminate.
You said your metric for determining which description is accurate was (among other things) simplicity, and you claimed that the “101 − 1” answer is more straightforward (simpler?) than the “100″ answer.
You can’t now turn around and say that the reason it’s simpler is because the “101-1” answer is accurate.
Either it’s accurate because it’s simpler, or it’s simpler because it’s accurate, but to assert both at once is illegitimate.
I’ll address this in my sequence, which hopefully I will have time to write. The short answer is that what matters isn’t which explanation of this situation is simpler, requires fewer words, a smaller number, or whatever. What matters is: which general rule is simpler?
Pattern or causal continuity leads to all sorts of weird edge cases, some of which I’ve tried to explain in my examples here, and in other cases fails (mysterious answer) to provide a definitive prediction of subjective experience. There may be other solutions, but computational continuity at the very least provides a simpler model, even if it results in the more “complex” 101-1 answer.
It’s sorta like wave collapse vs many-worlds. Wave collapse is simpler (single world), right? No. Many worlds is the simpler theory because it requires fewer rules, even though it results in a mind-bogglingly more complex and varied multiverse. In this case I think computational continuity in the way I formulated it reduces consciousness down to simple general explanation that dissolves the question with no residual problems.
Kinda like how free will is what a decision algorithm feels like from the inside, consciousness / subjective experience is what any computational process feels like from the inside. And therefore, when the computational process terminates, so too does the subjective experience.
Non-running algorithms have no experiences, so the latter is not a possible outcome. I think this is perhaps an unspoken axiom here.
No disagreement here—that’s what I meant by oblivion.
OK, cool, but now I’m confused. If we’re meaning the same thing, I don’t understand how it can be a question—“not running” isn’t a thing an algorithm can experience; it’s a logical impossibility.
Clearly, your subjective experience today is HONK-screech-bam-oblivion, since all the subjective experiences that come after that don’t happen today in this example… they happen 50 years later.
It is not in the least bit clear to me that this means those subjective experiences aren’t your subjective experiences. You aren’t some epiphenomenal entity that dissipates in the course of those 50 years and therefore isn’t around to experience those experiences when they happen… whatever is having those subjective experiences, whenever it is having them, that’s you.
Sounds like a fine plan, albeit a difficult one. Want to take a shot at it?
EDIT: Ah, you did so elsethread. Cool. Replied there.