I’m really surprised that so many people here think this way. Even a bit shocked. (Especially pengvado with N=1(!)). Which on the whole is a great experience :-)
Could someone try to explain to me, please, why you feel this way? How you came to feel this way? I have never seen any reason to stop thinking in terms of a thread of consciousness.
“Feel” probably isn’t the right word here. I’m sure we all feel the same kind of irreducible subjective thread of consciousness that you do. Just like we all have the same illusions of free will and objective morality. But on careful reflection, taking into account all the science you know, and performing a few relevant thought experiments, it gradually becomes clear that these folk concepts just don’t make sense. The species-typical intuitions don’t go away; you just learn to stop trusting them.
Free will and objective morality are claims about how the universe works, objectively. On reflection it becomes clear that they are contradictory and false, respectively.
But the subjective thread of consciousness isn’t a fact about the universe. It’s a fact about my experience. It makes no sense to say, as you seem to be suggesting, “I may feel conscious, but really it’s an illusion”. Because if I deny it, then the whole concept of feeling is undefined, and consequently, the concept of illusion is undefined. The idea of illusions, after all, implies that we might instead experience or believe something else which is not an illusion but is true.
You can’t claim that the subjective thread of consciousness is “wrong”, because it’s not an objective, empirical claim. We can imagine experiencing counterfactuals, but what would it be like not to have experiences? It’s not a meaningful question, so there’s no answer.
What I was asking is how, due to objective, physical events (you had ideas, read books...), you came to adopt this belief—although I don’t quite understand the belief yet, either.
Just look at all this “reality” business as a framework for understanding experience: how do you know that “reality” is “out there”? Why do you believe such claims? You are not entitled to your subjective experience any more than to believing that the planet Venus is a goddess of love and beauty.
I’m going to pull my parenthetical about spells of unconsciousness out in a separate comment, because the point seems to have been lost in the course of the discussion:
DanArmak, you seem to propose that the key process which defines the identity of a person at one time with a person at another time is the thread of consciousness trailing through spacetime from one to the other. How does your model deal with human beings—individual human organisms!—which undergo literal loss of consciousness? I do not refer to sleep, but to the actual shutdown of conscious perception, such as occurred to Jo Walton (papersky) when she hit her head and (I have heard) to many people under general anesthesia. In these cases, the persons describing their memories explicitly state that there is a finite period of time during which no conscious recollection occurs. Does that imply that the speculative fiction writer named Jo Walton living in Montreal on February 20th, 2006 is a different individual than the speculative fiction writer named Jo Walton living in Montreal on February 22nd, 2006? If not, why not?
To address the loss of consciousness scenario: I can’t speak from experience, and as I said, my theory is not formal, strict, or provable enough to be sure of things outside my experience.
The basic problem here is discontinuity. If the loss of consciousness is brief enough (a fraction of a second) and does not affect my future mental processes, it seems likely “I” will continue to exist. Any other boundary (of length and severity of unconsciousness) would be arbitrary, and so unlikely.
But my experience is discrete. I always have exactly one experience at a time (if conscious), so I always experience being “me”, not three-quarters me. “Me” is whatever “I” experience at the time :-) This is no good as a definition, but it’s a description every human understands. How to reconcile the two? I have no clear idea.
As I said before, my theory is far from complete—it’s more a list of facts than a structured model. It only describes those things that happen in typical human life. It may not be extensible to events like loss of consciousness, let alone cloning. In fact I’ve been pretty much convinced by this whole thread that my naive model probably can’t be fixed and extended to describe the entire space of physical and experiential possibilities. I’ll drop it happily for a better alternative—please give me one!
Any new theory has got to include the fact that I have actual experiences of being me. The theories being proposed here, in which I should anticipate equally “becoming” any one of my future clones, strike me as doing away with the concept of anticipation entirely.
I think we have, at least in sketch form—here’s Alicorn’s nutshell summary, and here’s mine. Both of our theories, if they are distinct, fit this intuition of yours—that a person is not destroyed and a new person created after a spell of unconsciousness—better than the thread-of-consciousness approach.
As for the rest, quite frankly you should expect to get weird results in weird situations like duplication. One weird result I expect is that, if you are duplicated, there will be two people afterwards, both of whose experiences suggest that they are DanArmak.
I took the time to think all this through before replying. I think I now grok your theory, and Alicorn’s, and the other posters’. And I pretty much accept it now. Thanks for your explanations.
The problem with my old approach, as I now see it, is the impossibility of empirically distinguishing it from infinitely many other possible theories. In such a situation, it is indeed best to choose an approach that optimizes outcome over all my configuration-descendants, because I might subjectively become any of them.
Of course, if I give up personal continuity, then the above statement becomes merely “because each of them will have memories indicating it is my descendant”. But I am forced to this point of view by the apparent impossibility of describing, in terms of physics, a personal continuity that does not break down in the face of (arbitrarily short) lapses of consciousness.
Thanks again to everyone else who participated and helped convince me.
I don’t see any reason to privilege the thread of consciousness—I’m confident it doesn’t actually work the way you’re supposing. My personal instinct is that I at every instant am identical to this particular configuration of particles, and given that such a configuration of particles will persist after the experiment (though on the other side of the world), it doesn’t seem particularly as if I’ve been killed in any permanent way. (I’m fairly sure I couldn’t collect on my estate, for example.) Sure, it’s risky, but if sufficient safeguards are in place, it’s teleporting, as pengvado said (?).
A note: even if I hadn’t had this instinct before, the idea of a persistent and real thread of consciousness is brought into doubt in a number of ways by Daniel Dennett’s revolutionary work, Consciousness Explained. My copy is on my shelf at home at the moment, but Dennett describes several instances in which the naive perception of consciousness is shown to be unreliable. I don’t think it’s a valid marker of identity.
(Besides, what of spells of unconsciousness? Should someone whose thread of consciousness is interrupted be considered to have been literally killed and reborn as a facsimile?)
I offer you a choice: either you suffer torture for an hour, or I create a clone of you, torture it for ten hours, and then kill it. In the second option, you are not affected in any way.
From what you’ve said, I gather you’ll choose the first option. You won’t privilege what you actually experience. But… I truly cannot understand why.
It’s often useful to think as if you’re deciding for all copies of yourself. You can maximize each copy’s expected outcome that way in many situations. You can also optimize your expected experience if you don’t know in advance which of ten thousand copies you’ll be. This kind of argument has often been made on LW. Perhaps I mistook some instances of arguments like yours, which truly don’t privilege experience, for this milder version (which I endorse).
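(To make that milder version concrete, here is a minimal, purely illustrative sketch of averaging a policy’s outcome over all copies when you don’t know which copy you’ll be. All the numbers and names below are invented for the example.)

```python
# Purely illustrative: an agent about to be copied N ways compares policies by
# averaging each policy's outcome over all copies, since it can't know in
# advance which copy it will find itself to be.

def expected_outcome(outcomes_per_copy):
    """Average utility over copies, with equal credence of being each one."""
    return sum(outcomes_per_copy) / len(outcomes_per_copy)

# Policy A gives every copy a modest outcome; policy B heavily favors one copy.
# (The utility numbers are made up for the sake of the example.)
policy_a = [5] * 10_000            # every copy gets utility 5
policy_b = [100] + [0] * 9_999     # one copy gets 100, the rest get 0

print(expected_outcome(policy_a))  # 5.0
print(expected_outcome(policy_b))  # 0.01
# Reasoning this way, the agent picks policy A, even though policy B is better
# for whichever single copy happens to be the favored one.
```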
You’re begging a very important question when you use “you” to refer only to the template for subsequent duplication. On my wacky view, if you duplicate me perfectly, she’s also me. If it’s time T1 and you’re going to duplicate Alicorn-T1 at T2, then Alicorn-T1 has two futures—Alicorn-T2 and Alicorn-T2*—and Alicorn-T1 will make advance choices for them both just as if no duplication occurred. If you speak to Alicorn-T1 about the future in which duplication occurs, “you” is plural.
Your “wacky view” sounds quite similar to mine—I would be interested to read that thesis when it is published.
The ‘you’ I used referred only to the pre-duplication person, who is making the choice, and who is singular.
The view you describe is the view I described in the last paragraph of my previous comment (the one you replied to). I understand and agree that you decide things for all identical copies of you who may appear in the next second—because they’re identical, they preserve your decisions. But you can only anticipate experiencing some one thing, not a plurality.
If someone creates a clone (or several) of me, I may not even know about it; and I do not expect to experience anything differently due to the existence of that clone.
If someone destroys my body, I presume I’ll stop experiencing, although I have no idea what that would be like.
By inference, if someone creates a precise clone of me elsewhere and destroys my body at the same moment, I won’t suddenly start experiencing the clone’s life. I.e., I don’t expect to suddenly experience a complete shift in location. Rather, I would experience the same thing (or lack of it) that I would experience if someone killed me without creating a clone.
Yes, this raises the question of why I experience continuity in this body, since physics has no concept of a continuous body. And why do I experience continuity across sleep and unconsciousness? I don’t have an answer, but neither does the alternative you’re proposing. The only real answer I’ve seen is the timeless hypothesis: that at each moment I have a separate moment-experience, which happens to include memories of previous experiences, but those memories are not necessarily true—they are just the way my brain makes sense of the universe, and it highlights or even invents continuity. But this is too much like the Boltzmann brain conjecture—consistent and with explanatory power, but unsatisfying.
“You” sure seemed like it referred to only one of the post-duplication individuals here:
In the second option, you are not affected in any way.
(This when one of me seems to be quite seriously affected, in that you plan to torture and kill that one.)
And here:
You won’t privilege what you actually experience.
(This when I do privilege what I actually experience, and simply think of “I” in these futures as a plural.)
And here:
you don’t know in advance which of ten thousand copies you’ll be
(But I’ll be all of them! It’s not as though 9,999 of these people are p-zombies or strangers or even just brand-new genetically identical twins! They’re my futures!)
No, I wouldn’t. I’d choose the second option so as to prevent my torture from being compounded with my total death.
Er, the second option is the one where I kill the clone. In both options, only you remain alive after 10 hours, no clone.
How about this cleaner version: I create a clone of you (no choice here). Then I torture you for an hour, OR the clone for ten hours, after which you’re both free to go.
I’d choose one hour I think.
How about this: choose between 59 minutes of torture for you and 10 hours for the clone, vs 1 hour each for you and the clone, with the experience of both you and the clone being indistinguishable for the first 59 minutes.
If you choose the 59min/10hrs, what’s going through your mind in minute 59? Is it “this is all about to stop, but some other poor bastard is going to have a rough 9 hours”? Or is it “ohgodohgodohgod I hope I’m not the clone”?
I’d choose one hour also.
In your new formulation, I’d choose the 1 hour for both of us and we (both copies of me) would both expect it to be over soon at the 59th minute. My copies and I would be in agreement in our expectations of each other’s behavior.
I identify closely with anything sufficiently similar to me—including close past and future versions of me. For instance, if there was a copy of me made an hour ago (whether or not the copy had runtime during that hour), and he or I were given the choice during your test, we would choose the same thing, as mentioned above.
If you choose the 59min/10hrs, what’s going through your mind in minute 59? Is it “this is all about to stop, but some other poor bastard is going to have a rough 9 hours”? Or is it “ohgodohgodohgod I hope I’m not the clone”?
It’s true that at minute 59 neither the original nor the clone knows which one he is. But the original, who really made the decision before the clone was created, correctly optimized his own future experience: 59 minutes of torture instead of an hour. The original doesn’t care about the clone’s experiences. (That is, I wouldn’t care.)
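(A purely illustrative way to see where the two positions diverge, treating minutes of torture as the only cost; the numbers below just restate the scenario above.)

```python
# Purely illustrative: minutes of torture per person under each option.
option_a = {"original": 59, "clone": 10 * 60}   # 59 min for you, 10 hours for the clone
option_b = {"original": 60, "clone": 60}        # 1 hour each

# Rule 1: "I will certainly wake up as the original", so only its minutes count.
print(option_a["original"], option_b["original"])   # 59 vs 60 -> rule 1 picks option A

def average(option):
    """Equal credence of finding yourself as either copy."""
    return sum(option.values()) / len(option)

# Rule 2: "I could equally find myself as either copy", so average over both.
print(average(option_a), average(option_b))         # 329.5 vs 60.0 -> rule 2 picks option B
```

Which decision rule to endorse is, of course, exactly the point under dispute; the arithmetic only makes the two answers explicit.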
Sounds consistent. Forgive me if I probe a bit further: I’m not trying to be rude, I’m interested in the boundaries of your theory.
In an unconsciousness—clone—destroy original brain—wake scenario, do you anticipate surviving?
In an unconsciousness—clone twice—destroy original brain—wake scenario, do you identify with / anticipate being zero, one, or two of the clones?
I changed my mind since then. So I would make different decisions now… more in line with what others here have been proposing.
I would try to optimize for all projected future clones. But in a scenario where I know some clones are going to die no matter what they do (your previous question), I would partially discount the experiences such clones have before they die and try to optimize more for the experiences of the survivor. That’s just my personal preference: the lifelong memory of the survivor matters more than the precise terminal existence of the killed clone.
Regarding your new questions about anticipation, under the new theory that has no concept of personal continuity, there doesn’t seem to be such a thing as personal survival where duplication and termination are involved.
...I see. That doesn’t change my answer, as it happens; my clone dies, yes, and you bear moral culpability for it, but it is better for one to make it out (relatively) unscathed than for the only survivor to be traumatized. In the new version, I would prefer there be only one hour of torture between us, and accept the first option.
See, the thing is: in my utility function, I don’t have a special ranking for “my” experiences over everyone else’s. When I do the math, I come out paying a lot more attention to my own situation for purely pragmatic reasons.
Even given your theory and your utility function, I don’t see how your clone’s 10 hours of torture and subsequent death would leave you traumatized. Isn’t it best for the survivor to have experienced no torture at all (so we’d torture the clone)?
Also, regarding Scenario B: imagine that I decided to get in on it, only with a variation. Instead of duplicating you and offering those two options—either your original be tortured for an hour or your duplicate be tortured for ten—I created an entirely new individual, no more similar to you than any other human being, and gave you the choice between being tortured yourself for an hour and the new guy being tortured for ten.
Which would you choose? Me, I think it’s perfectly obvious that it is better for less torture to occur.
Better for whom? To me it’s perfectly obvious that it’s better for me to have someone else tortured, and I would choose that. I would only choose to be hurt myself if the tradeoff was very unequal (a speck in the eye for me, ten hours of torture for him), and even then I would soon stop agreeing to be hurt if I had to face such a choice repeatedly.
If someone were to mount a campaign to stop your entire torturing project, and if I could participate by being hurt (but not by endangering my life), then I would agree to pay a much higher price. But that’s because such participation is a form of social capital and also helps enforce social norms elsewhere (this is both an evolutionary and a personal reason).
I’m sorry—in Scenario A (I suffer torture, or duplicate suffers more torture and is killed), I would choose the second option for essentially the reasons you propose. In Scenario B (I’m duplicated, then either I suffer torture or duplicate suffers more torture, then we both live), I would choose the first option because that’s less torture. I don’t see the complexity.
Making sure I was understood correctly: after the clone is killed (which happens in both scenarios), the only identifiable “you” is the survivor. Therefore any considerations of lasting trauma should apply to the survivor, or not at all. So to minimize trauma (without minimizing the amount of torture), we should ensure that the survivor—who is known even before the clone is killed—is the one who is not tortured.
I was under the impression that the clone survived in the second scenario—that that was the difference between the two scenarios. This might explain some confusion about my answers, if this was a confusion.
No, the scenarios I originally proposed were:
1. A clone is made.
2a. If you choose, you’re tortured for an hour.
2b. Or, if you so choose, the clone is tortured for ten hours.
3. Ten hours from now, the clone is destroyed in any case.
I’m confident it doesn’t actually work the way you’re supposing.
How do you think it does work? That is, are you suggesting there is a thread of consciousness that sometimes works differently from how I’ve experienced it? I haven’t seen a good model so far.
I’ve read Dennett’s book. It does a good job of deconstructing and disproving existing models, but I don’t remember that it proposed a good new model, just some interesting ideas and pointers.
Meanwhile, your model:
I at every instant am identical to this particular configuration of particles
has its own share of problems. For instance, you have no idea how many configurations identical or epsilon-similar to yours exist elsewhere at any given moment. You can’t know when they’re created or destroyed or modified. How can you not privilege the pattern-instance that right now is posting on LW, if you have no idea whether it’s the only instance or one of a million, the others being clones I just created in my basement?
Okay, I’ll grant you that I privilege the one I am at the moment, but the nine hundred ninety nine thousand nine hundred and ninety nine duplicates will each privilege themselves—and if I knew that they would be created in advance, I would be concerned for what they would experience for the same reason I care about any other future experience of mine.
Would you still say yes if there was more than 10 seconds between copying you and killing you—say, ten hours? Ten years? What’s the maximum amount of time you’d agree to?
...no, I don’t think so. It would change what the original RobinZ would do, but not a lot else.
So ten seconds isn’t enough time to create a significant difference between the RobinZs, in your opinion. What if Omega told you that in the ten seconds following duplication, you, the original RZ, would have an original thought that would not occur to the other RZs (perhaps as a result of different environments)? Would that change your mind? What if Omega qualified it as a significant thought, one that could change the course of your life—maybe the seed of a new scientific theory, or an idea for a novel that would have won you a Pulitzer, had original RZ continued to exist?
I think the problem with this scenario is that saying “ten seconds” isn’t meaningfully different from saying “1 Planck time”, which becomes obvious when you turn down the offer that involves ten hours or years. Our answers are tied to our biological perception of time—if an hour felt like a second, we’d agree to the ten hour option. I don’t think they’re based on any rational observation of what actually happens in those ten seconds. A powerful AI would not agree to Omega’s offer—how many CPU cycles can you pack into ten seconds?
I don’t quite understand the idea that someone who accepted the original offer (timespan = 10 seconds) would turn down the offer for any greater timespan. Surely more lifespan for the original (or for any one copy) is a good thing? If you favor creation of clones at the cost of your life, why wouldn’t you favor creation of clones at no immediate cost at all?
I like your point. I think I would accept such an offer with a greater time span if N were > 1, if I knew how long I had, and if I could be with my copies.
The psychological stress of anticipating dying wouldn’t be worth it to me for just N=1.
Not knowing when that one would die (only that he would) would be too psychologically stressful to be worth it.
The one of me with an expiration date would live his remaining time differently than those who kept going. The ones of me who kept going would do things to honor him and fulfill his needs. The doomed one would expect them to do this.
For longer expiration dates, we collectively would need greater compensation (more copies) to make it worth it.
For short time spans (1 second), I would accept N=1 for teleportation.
I don’t know if I’ll claim that.