The Hidden B.I.A.S.
It would be a stretch to call this an article, but the questions it poses are potentially far-reaching: their answers may reveal reasoning flaws, either in my own philosophy, or perhaps even in yours. The flaws I suspect are caused by the modularity of the brain’s systems, and by our ability to hold conflicting beliefs so long as they are not held directly against one another.
These particular flaws escape notice, I think, because they tend to receive reflection only in specific situations; my thought experiment here should help to hold them near each other.
The Setup: Julian finds himself in the waiting-room of the Speedy-dupe office. Beyond that waiting room are three isolated rooms (P, Q, and R). Anyone who walks into Room P, which contains the Speedy-dupe device, will be scanned down to the most exact level imaginable, causing them to lose consciousness. Anyone who has used the Speedy-dupe will remember everything up until the point they entered the waiting-room, and begin forming new memories within seconds after regaining consciousness.
Situation 1:
If Julian walks into Room P, and the Speedy-dupe runs, and then Julian walks out of Room P, and also another Julian walks out of Room Q, which is the “original” Julian? What makes Julian-P more original than Julian-Q?
Possible Answers 1:
You probably would say that Julian-P is the original Julian, due to your prior beliefs regarding causality—but how many times have you encountered the Speedy-dupe? For all we know, the person who walks into Room P is vaporized after scanning, and duplicated in both Room P and Room Q. If you still feel that Julian-P is the original, ask yourself what other reason you have for feeling that way. What is it that you aren’t mentioning?
Situation 2:
If Julian walks into Room P, runs the Speedy-dupe, and Julian walks out of Rooms Q and R, but not out of Room P, which is the original Julian? Why not?
Possible Answers 2:
You might be saying to yourself, “Ah, now, you can’t trick me. Neither of them is the original!” If they are both practically identical copies of the original Julian, what now stops you from identifying the original Julian with his identical copies? Are legal property issues really the only thing stopping you from modifying your views on identity?
Situation 3:
But what about if Julian walks into Room P, is scanned by the Speedy-dupe, and walks out of Room P ten years later? Does that mean it is the “original” Julian?
Possible Answers 3:
Getting increasingly annoyed or bored with these questions, you might retort, “I see what you’re doing, and it’s not going to work. You are obviously anti-cryonics, but you are wrong here. Cryonics in some way preserves the original material, but your Speedy-dupe vaporizes it. The copy which emerges ten years later is not a direct continuation of the original physical material.”
Based on what we’ve already thought about here, is continuation of the original physical material the important thing that counts toward your identifying with your future post-cryonic-revival self? If so, why? If the pattern is recreated precisely (or even well enough) at a temporal or spatial distance from the original, what is actually different between Speedy-dupe and Cryonics?
My Suspicion:
If you answered on a completely different track than the Possible Answers did, just ignore me for now (if you have not already done so). I think that what is lurking beneath most of these typical objections or feelings is actually B.I.A.S.: Belief In A Soul. Despite all scientific evidence, a part of you still believes that each person has some special little spark that goes on after death, that is ultimately the thing that makes you who you are.
Not that your personality took your entire life to be shaped by genetics and by life experiences imprinted on the blob of cells that eventually grew complicated enough to handle who you are now; but that it is an invisible special material woven by a loving creator, just right for what you were destined to become.
Not that when your body stops, it stops, and that process that you called life is over, whether that filigree of frozen carbon is forced to move a century from now or not; but that the unique thing that is hidden inside of you now will just hang around and gladly jump back in a century from now.
Not that your partner could love your clone and never know the difference, or even just leave you and wind up with someone strikingly similar; but that your two souls were destined to love one another for all eternity.
It’s easy to gloss over all those things, but just because everyone would like it to be that way, doesn’t make it true. If I am clearly Wrong, tell me why I am Wrong, in order that I may be Less so. If not, I hope that this has helped you in Overcoming B.I.A.S.
Credits: The original function and name of the Speedy-dupe come from The Duplicate, a story by William Sleator, my favorite childhood author. (Many of his books combine normal childhood problems with mind-bending philosophical and physical concepts not normally found in youth literature.)
The idea for the multiple rooms came from the episode “The Girl Who Waited” from Doctor Who.
Any other content, if objectionable, can simply be considered personal mind-spew.
Enjoy.
I got a bit bored with your examples (mostly because the replies you offered weren’t mine), but didn’t think you were referring to cryonics until this part.
Think of that “special little spark” as a useful legal fiction—it’s a concept that facilitates laws and informal norms of behavior. It breaks down if you can duplicate people, but that’s not possible yet—when it is, we may have to adjust a lot of our concepts.
Similarly, freedom, justice, truth, democracy, fairness etc. don’t “really” exist any more than the soul does, but can still be useful concepts.
Foma, in other words. The concepts you mentioned are useful because they represent established behavior sets; they are what we make them. A soul, by contrast, is an actual false claim, and only useful when you don’t realize that it is false. I don’t endorse self-deception like that; it’s a slippery slope from there.
All copies of Julian are the original.
I feel bad, because you’re clearly trying, but still downvoting this for: strange formatting and incorrectly guessing the reader’s thoughts (perhaps unfamiliar with some common positions held here?). I doubt almost anyone on LW will agree with your possible answers.
I don’t fault you for your reasons, as I didn’t add enough disclaimers to earn your forgiveness. If by strange you mean “unconventional” formatting, yes, I am guilty of that. I didn’t feel that I was smart enough to get away with the rigid format usually found here without sounding pretentious. And I’ve seen articles downvoted for more petty reasons than this, so I’ll take it. And think about this:
If someone got every answer on a multiple-choice test wrong, wouldn’t that seem suspicious? You can’t do that much worse than chance by accident. What I was trying to show were the “gut instinct” answers, which part of any human might give before their executive functions override it. I am aware that most readers here are more advanced than me, and that there are a few things that, despite reading all the Sequences, I must still be missing. I am trying to find out what I’m wrong about.
There is nothing really wrong or missing with the viewpoints expressed in this post at all. Don’t be too discouraged.
It’s just getting downvoted because no one on LessWrong suffers from B.I.A.S. in the first place, and so they found it boring or unhelpful: the sort of thing you might ponder in some Philosophy 101 college course. You simply underestimated the average reader of this forum. No fault to you—certainly the general population does suffer from B.I.A.S.
(clever acronym, btw :)
Personally, I don’t think I even alieve in souls as you define them, let alone believe in them. Even at the most basic, gut level, I don’t really think of myself as anything other than a conscious physical system...although I admit that in some rare moments (like when pondering qualia) the thought still disorients me. I think most people here would say the same. This post basically narrates a struggle with questions which people here have long ago un-asked.
Still, I think that overcoming the mind-body dualism illusion for the first time is a pretty big step. Do you still feel confused / compelled by this illusion on a gut level, despite intellectually disavowing B.I.A.S? Or was this post only intended as a guide for others to get past it?
I think you described it best when you said the issue was “un-asked”. Everybody here may be over it, but that is just the point when it gets the chance to creep back in. It was more as if I were walking around with a giant “BEWARE!” sign—all the other biases seem to be countered by addressing them, and this looked like a big one that was not often talked about. I figured it would be a good addition to the bias-avoidance toolkit, because if you don’t include it specifically, the next world dictator (human or otherwise) will have world-class rationality training, and will use it to rationalize the B.I.A.S. that they were never told to avoid.
I personally think that it’s such an ingrained thing that it probably colors my ideas in ways that I am not aware of, so hearing more about it would be helpful to me, even if I’m the only one affected. I find that hearing other theories besides my own can help in this situation.
Thanks. It’s not mine; AFAIK the origin of “un-asked” is here: http://en.wikipedia.org/wiki/Mu_(negative) I think there are Sequences posts with similar concepts here. Of course, we use it every day with N/A.
You aren’t the only one affected, but there is a large population of unaffected people on this site. You’ll want to try and overcome B.I.A.S. on a gut level, if possible. I think that the moment you fully understand the relationship between the mind and the rest of the universe, the intuitive preference for dualist thinking goes away. I think for many people there is one moment when it all just clicks into place.
Want to try?
If so, let me first establish what your definitions are and make sure that you aren’t confused about any of the important things before proceeding.
1) Universe—deterministic, random or some third thing? Is there even a third option? What is a universe anyway? Is it governed by logic? Can anything not be governed by logic?
2) Free will—Make a coherent definition. What does your answer to the previous question mean for free will? If you prefer to say that there is no free will, explain why (or whether) it feels like you have free will.
Here are more, but don’t answer them yet if you found (1) and (2) difficult.
3) “I think therefore I am”—agree, disagree, or deconstruct?
4) Qualia—why does it happen? What’s subjective experience? Why do you experience things from a specific point of view? Can you be certain that I have qualia?
If you don’t have good answers (or unaskings) for these questions, your B.I.A.S. is most likely due to some logical issue and can be fixed. If you have good answers… I’m not sure what happens then; maybe we look at it from another perspective to convince whatever part of you is still holding out.
If you are interested in playing, answer my questions or unask them. I’ll just keep going till we reach the very bottom, and hopefully at the end you’ll come out free of any alief in mind-body dualism at a gut level. If this actually succeeds, it will be pretty interesting...at worst we’ll waste a bit of time.
So far I’ve played this game with one other person. Midway through our discussion he described an instance of depersonalization; it was fleeting and unpleasant. We didn’t continue (loss of interest, time constraints). I was aiming more for ego death, but I guess depersonalization is pretty close. To paraphrase, he said “I just saw things way too clearly for a moment...I just saw myself from the outside.”
I would very much like to see things way too clearly...
Dealing with the local, classical physics universe that my body’s senses are adapted to perceive, I’d have to go with “third option” in the “time-loaf” sense. I suspect that MWI is true, so yes to random which one this is, but deterministic in its worldline. To me, logic is shorthand for what is actually permissible in nature. We just are not so good at defining the rules yet. Something can only appear to not be governed by logic through lack of proper resolution of the measurements.
I think that any sufficiently detailed understanding of physics renders the existence of person-level free will meaningless. Our savanna-dwelling ancestors had no need for such an understanding. We animals ascribe agency to all kinds of wacky shit, including these bodies. Hence, the ego. I don’t feel like I’m being controlled, because in the macro sense, I’m not. The universe just runs, it doesn’t have feelings or a way of doing anything but what it actually does; and what it actually does determines what I am able to do.
Some people think physics renders FW non-existent, some think it doesn’t. Most of them provide a definition of FW so that you can see how the conclusion is drawn. But you said that physics renders FW meaningless. How does that even work? I read a dictionary definition, so the meaning of the word is now in my mind... then someone in a lab makes a discovery, and the meaning disappears?
I will answer your question, but I do not understand your last statement; it looks like you retyped it several times and left all the old parts in.
I meant that with a sufficiently detailed understanding of physics, it would be meaningless to even posit the existence of (strong) free will. By meaningless here I mean a pointless waste of one’s time. I was willing to clarify, but deep down I suspect that you already knew that.
Uh-huh. So “meaningless” means “very false”. Although there are physically based models of Free Will.
I take it that you’re nitpicking my grammar because you disagree with my views.
As for what topic I am talking about, it is this: In the most practical sense, what you did yesterday has already happened. What will you do five minutes from now? Let’s call it Z. Yes, as a human agent the body and brain running the program you call yourself is the one who appears to make those decisions five minutes from now, but six minutes from now Z has already happened. In this practical universe there is only one Z, and you can imagine all you like that Z could have been otherwise, but six minutes from now, IT WASN’T OTHERWISE. There may be queeftillions of other universes where a probability bubble in one of your neurons flipped a different way, but those make absolutely no practical difference in your own life. You’re not enslaved to physics, you still made the decisions you made, you’re still accountable according to all historical models of accountability (except for some obscure example you’re about to look up on Wikipedia just to prove me wrong), and you still have no way of knowing the exact outcomes of your decisions, so you’ve got to do the best you can on limited resources, just like the rest of us. “Free Will” is just a place-holder until we can replace that concept with “actual understanding”, and I’m okay with that. I understand that the concept of free will gives you comfort and meaning in your life, but “I have no need of that hypothesis.”
I was (and am now) nitpicking your semantics, in order to establish your meaning.
The fixity of the past does not imply the fixity of the future.
Before or after it happened?
Four minutes from now it might have been. The fixity of the past does not imply the fixity of the future.
Free Will isn’t less important than a practical difference; it is much more important. It affects what making a difference is. If FW is true, I can steer the world to a different future. If it is false, I cannot make that kind of difference: in a sense, I cannot make any kind.
Whatever that means.
As you have guessed, lack of accountability (in certain senses) is a key issue in Libertarianism.
That is irrelevant to the existence of FW. Nothing about FW implies omniscience, or the ability to second-guess oneself.
How do you know that hasn’t happened already?
You’re trying to ad-hom me as a fuzzy-minded irrationalist. Please don’t.
No need, you’re doing a fine job of that all by yourself.
Those are both good, coherent answers. I’ll assume from your lack of comment on (3) and (4) that you currently find them difficult, so I’ll aim in that direction.
Here is the one point I want to expand upon a little. You said that the reason that you “feel” like you have free will is the following:
Now, that explains why we assign souls to other people and to clearly non-conscious things like rain and disease. It also explains much of the belief in God. But does it explain why we assign souls to ourselves? How do you justify to yourself the fact that you can personally feel your thoughts, emotions, and sensory input?
If you think you know the answer to this, or if you think that’s a silly question in the first place, elaborate. If you think it’s a reasonable and difficult question, or if you think the question is unanswerable, we’ll come back to it later.
Oh, also: I believe that this question is the puzzle that lies at the crux of the B.I.A.S. Please elaborate if you disagree with me on that.
---Anyway, moving on.
You didn’t define free will like I asked, but that’s okay—it indicates that you are implicitly using a definition of free will which is impossible in any logical universe, and thus cannot be coherently defined without contradiction. And you correctly perceive that the universe runs, and you and I are portions of that process. So far so good.
A universe made of axioms makes sense, right? You can imagine a deterministic multiverse / random world-line. You can even create some simple universes.
Now answer me this:
Can you imagine creating a person within a consistent set of axioms? I’m not saying that you actually know how to construct a person, of course—I’m just asking whether it is an intuitive, gut-level truth that a set of axioms could in principle create conscious beings. I know that you already intellectually believe it possible—but do you alieve that it is possible?
Note that I’m not asking whether or not you personally could exist within this set of axioms and still have subjective experience....though we will get to that later. I’m just asking whether or not it’s intuitive that beings similar to you could exist within a set of axioms.
If it is intuitive to you that axioms can construct people, elaborate a little on the basic outline of how this might be done.
If it’s not intuitive, let me know and we’ll focus on making it so.
I did not comment on 3 and 4 because I thought you wanted to judge first whether I understood the first two.
To me, yes. I think that a theory of mind is ascribed to oneself first, then extends to other people. On a beginner level, developing a theory of mind toward yourself is easy because you get nearly instant feedback. Your proprioception as a child is defined by instant and (mostly) accurate feedback for everything within your local skin-border. After realizing that you have these experiences, and seeing other humans behave just as if they also have them, and being nearly compelled by our wetware to generalize it to other animals and objects, our “grouping objects” programs group these bundles of driving behaviors into an abstract object (which is visualized subconsciously as a concrete object) which we call a soul.
That’s a much more coherent summary of what I meant, yes.
You just said it—“A universe made of axioms makes sense, right?” My existence in a universe shows that it in fact has been done, saving me the trouble of proving how.
I enjoy your conversation, but I’m not particularly on the brink of an existential crisis here. In reference to my article I am simply admitting that I am aware that it is a limitation of the human brain to be guarded against, much like not sticking my hand on a hot stove prevents tissue damage. I don’t expect people to be immune from it, but we’d be better off if we were more conscious of it. Instead I brought on a flurry of angry retorts that amount to “Hey, I’m not subject to fallibility, just who do you think you are accusing me of being human?”
Haha, well it’s only worked on one person so far, I just figured it would be amusing to try. I meant for you to yourself judge if the first two questions were easy or hard and decide whether to do the second ones accordingly—sorry if that didn’t come across. I wasn’t sure where you were coming from philosophically, and maybe the first two questions were trivial to you, so I put some harder ones out there.
I’ll just explain what I was attempting to do here, since it seems you might be getting tired of the whole question-and-answer thing. In retrospect maybe that was dumb of me—I was mostly curious to see if someone else would independently arrive at my conclusions if asked the same questions, as a way to test the strength of those conclusions.
It’s easy to imagine, from an objective and detached viewpoint, a universe like our own in which people exist. It’s easy to reduce the universe into a set of axioms.
But none of that explains why you are in your body. It’s called the hard problem of consciousness. If a person is still confused about the hard problem, they will continue to instinctively believe in souls—and rightly so, because if one doesn’t see how physical matter could lead to qualia, it’s quite natural to think something unexplainable and mysterious is going on. Really, if you aren’t sure how matter gives rise to qualia, how would you know whether or not some precious link was severed if you, say, teleported by destroying and remaking all your molecules or something?
So what the previous questions have established is that you’re able to do step one (imagine the universe as a set of axioms). The next set was intended to establish step two (figure out how qualia works itself in this picture).
I was hoping that if we got past step two, you wouldn’t have B.I.A.S. any longer (although perhaps you’d be confused in an entirely different way). My implicit assumption here is that B.I.A.S. is ultimately a problem of not having dissolved the hard problem of consciousness, rather than an instinctive bias that all humans must have.
Thoughts? Should I continue with the next set of questions? Should I just tell you what I think? Or have you already figured this out to your satisfaction? (If so, I want to hear your conclusions.)
I’m not offended, that’s one of my favorite games. My thought process is so different from that of my peers that I constantly need to validate it through “coerced replication”. I know I’m on the right track when people clearly refuse to follow my train of thought because they squirm from self-reflection. Yesterday I got a person to vehemently deny that he dislikes his mother, while simultaneously giving “safer” evidence of that very conclusion, because, you know, you’re supposed to love your parents.
Regarding the hard problem of consciousness, I am not even sure that it’s a valid problem. The mechanics of sensory input are straightforward enough; the effects of association and learning and memory are at least conceptually understood, even if the exact mechanics might still be fuzzy; I don’t see what more is necessary. All normal-functioning humans pretty much run on the exact same OS, so naturally the experience will be nearly identical. I have a (probably untestable) theory that, due to different nutritional requirements, a cat, for example, would experience the flavor of tuna almost identically to the way we taste sugar. And a cat eating sugar would taste something like we do eating plain flour, and catnip would be like smoking crack for humans. The experience itself between different creatures can be one of several stock experiences, brought on by different stimuli, just because we all share a similar biological plan (all animals with brains, for instance).
An experience like an orgasm could be classified to be something like having a Level 455⁄293 release of relative serotonin and oxytocin levels, whereas eating chocolate causes a Level 231⁄118 in a specific person. If by some chance you measured the next person to have a Level 455⁄293 from eating chocolate, then you know that what they are experiencing is basically equivalent to an orgasm, without the mess. One human’s baseline experience of blue is likely to be very similar to another’s, but their individual experiences would modify it from that point. You know that they experience blue in much the same way that you know they have an anus. It’s a function of the hardware. In some rare cases you might be wrong, but there’s nothing mysterious about it to me.
Go ahead and tell me what your theories are, I’m sure that I’m not the only one listening. Even if we aren’t enlightening anybody, I’m sure we are amusing them.
Apologies for the late response. Grant proposals and exams.
I think the following series of posts really captures how I go about intuitively deconstructing the notion of “individual”.
EY discusses his confusion concerning the anthropic trilemma and I think his confusion is a result of implicit Belief In A Soul, and demonstrates many similarities to the problems you outlined in your post. KS tries to explain why this dissonance occurs here and I explain why dissonance need not necessarily occur here in the comments.
To summarize the relevant portions of this discussion, EY(2009) thinks that if you reject the notion that there is a “thread” connecting your past and future subjective experiences, human utility functions become incoherent. I attempt to intuitively demonstrate that this is not the case.
Hopefully people will weigh in on my comment over there, and I can see if it holds water.
As I read the “Anthropic Trilemma”, my response could be summed up thus: “There is no spoon.”
So many of the basic arguments contained in it rest on undefined concepts, if you dig deep enough. We talk about the continuation of consciousness in the same way that we talk about a rock or an apple. The only way that a sense of self doesn’t exist is the same way that a rock or apple don’t exist, in the strictest technical sense. To accept a human being as a classical object in the first place disqualifies a person from taking a quantum-mechanical cop-out when it comes to defining subjective experience. People here aren’t saying to themselves, “Huh? Where do you get this idea that a person exists for more than the present moment?? That’s crazy talk!” It’s just an attempt to deny the existence of a subjective experience that people actually do, um, subjectively experience.
Well, if you duplicate an apple (or even another person) there is never any confusion of which one is “real”. They are both identical duplicates.
However, when you talk about duplicating yourself, all these smart people are suddenly wondering which “self” they would subjectively experience being inside. And that’s pretty ridiculous.
So you need to point out that the self doesn’t really exist over time in the strictest technical sense, in order to make people stop wondering which identical copy their subjective “self” will end up in.
These questions don’t make sense because, in the same way that you can’t subjectively experience other people, you can’t subjectively experience yourself from the past or the future.
Well-said. Thank you.
I may just be speaking for myself here, but...
More disclaimers would not really help, it just makes it harder to read. If you’re adding more than one or two disclaimers to your writing, maybe it needs a rewrite more than it needs more disclaimers. If you reframed this as a conversation between two parties and didn’t imply that anyone on LW necessarily agreed with one or the other, that might help. E.g., I found the jump to cryonics in the OP very jarring, but I could much more readily believe that someone might make that leap than that most people would.
Anyway, don’t be too discouraged by the poor reception this piece got. It’s difficult to write anything that will pass the LW crowd unscathed.
Thanks for the critique; the dialog format would have been much more appropriate. I am actually surprised at how little karma I’ve lost over my first article. I was fully expecting a −100 or so.
I don’t see any strange formatting. I guess it doesn’t really have obvious headers, but that’s about it.
I originally thought the downvoting was excessive, but I agree with lavalamp’s point about the questionable guesses of what reader responses will be. On the formatting, for me, spacing between paragraphs is inconsistent (sometimes there, sometimes not). I don’t know if that’s what lavalamp meant.
Regarding the questionable answers, I purposely got all those answers wrong to show what a “typical” guess might be, not the prevailing LessWrong opinions. I thought it was obvious enough not to point it out, and there was where I was mistaken. Sorry.
I am pretty sure most people here consider the question about “original” person after person-duplication confused, arbitrary, uninteresting, unimportant, meaningless or any combination thereof. Legal issues can be resolved by new laws and apart from that, what is “originality” good for?
Relevant link to SMBC.
Oh good, you did understand what I was getting at.
The use of scare quotes here is very suggestive. It shows that you realize that you are not talking about a solid concept. I strongly suggest that you try tabooing the word “original”, and see where you can go from there.
I used quotes because not only is it not a solid concept, it’s not even a valid one. The point was that to think that way betrays an a priori belief in a soul.
Define “soul”.
If they’re supernatural entities, I don’t believe in them. If they’re just whatever makes you a person, I believe in them. I believe they can be created, destroyed, modified, and copied. The Julian who walks into the room is not the one who walks out of any of them, and is equally similar to all the ones who walk out, but that doesn’t mean there aren’t souls involved.
Before anyone starts talking about what is and isn’t supernatural, I’m using it to mean fundamentally mental. You have a soul, but it’s made of atoms interacting the way you’d expect them to interact, so it’s not supernatural.
By soul in this article, I mean a supernatural extra object, but I am aware that many people here rationally reject that notion. What I was trying to get at was that even though we understand that it is not true, many hidden thought processes take the existence of a (supernatural) soul for granted.
I am curious about the opinions of other people here about what actual physical processes would comprise a non-supernatural soul. If I replaced all of my insides with such advanced electronics that nobody else would notice a difference (without a medical examination), does that mean I have the same soul? If I had to define a soul it would be “whatever it takes internally (mentally and physically) to cause an identical outward behavior as observed in a certain point in time”. That satisfies the conditions of your definition, as well, although it is not restricted to “people”, but actually comes closer to defining what qualia actually are.
I believe personal identity is an illusion. The other soul would have the same memories as you, and would feel the same from the inside. It would still be a different soul. Exactly the same would be true if you waited a split second.
I agree. It would be easier if only it weren’t such a powerful illusion.
I would guess that many people here disagree with that assessment.
Not much. Both are processes that send a snapshot of the physical implementation of all the algorithms that are collectively called a person/”soul” through time or space.
I had read that article, which this one was supposed to be a sort of follow-up to. Many people here may disagree with my example answers intellectually, but as the Zombie article points out, that doesn't stop the false intuition from persisting.
Which brings me to the very subject that I hoped to discuss: Why would you or I care whether we get revived one hundred years from now? Reading on this forum, I feel like I should care, but for some reason I don’t. Reproducing a similar version of my wavefunction from second to second takes considerably less effort and resources, and I think that’s the process that we intuitively care about. That’s an easy place for me to draw the line between what I consider “me” and “not-me”. What are your personal feelings about identity?
You wrote this LessWrong post about cryonics being a good idea under the assumption that your readers would disagree with an argument from the core sequences which is usually used to support the “cryonics is a good idea” conclusion on LessWrong? To each his own.
Here are the real/hypothetical cases that mostly formed my answer to your last question:
If you were to replace every neuron in your brain with a robotic cell exactly simulating its function, one neuron at a time and timed such that your cognition is totally unaffected during the process, would this cause you any doubts about your identity?
Why doesn’t the interruption in your conscious experience caused by going to sleep make you think you’re “a different person” in any sense once you wake up, keeping in mind that a continuous identity couldn’t possibly have anything to do with being made of the same stuff? What about when people are rendered temporarily unconscious by physical trauma, drugs, or other things that the brain doesn’t have as much control over as sleeping?
Does this mean that I should not fear death, because since I can in principle be exactly reproduced, it is not fundamentally different from sleep? In a classical sense, it is this body that I actually care about preserving, not my pattern of consciousness—that’s where the fear of death is coming from. And deeper, it is really my body that cares about preserving my body—not my consciousness pattern. So the problem that I am having trouble wrapping my head around is that statistics alone makes recreation of my pattern of consciousness likely; cryonics doesn’t really add much more likelihood to it, in my opinion. At whatever point in the future that I am recreated by mere chance or simulation, that will be the next time “I” exist, whether it’s a billion years from now, on another planet, or another universe. Neither does it stop me from dying, so what is the actual point of cryonics, since it seems to not satisfy either of its purposes?
Preserving that information makes it much more likely you’ll be reproduced accurately and in a timely manner and in a situation you would be able to enjoy, rather than in twenty quintillion years because of quantum noise or some such. Part of the point of preserving your state until it can be transferred to a more durable artifact is that there’s some chain of causal events between who you were when your state was recorded, and who “you” are when that state is hopefully resumed; many people seem to value that quite a bit. You should try to avoid death regardless of your beliefs about cryonics, identity, or just about anything else.
That’s a helpful, honest answer, thanks. I have a lot of empathy, but basically no sympathy in my programming. Unfortunately this extends even to my regard for my future selves. I try to avoid death in the moment and the near future, but I don’t seem to identify with my future self. So hearing something like “Well, most other people would want so and so, now you know,” at least helps me understand humans.
What would really suck is another me walking through the door, informing me “Guess what! A benevolent alien-god duplicated you, I mean me, and granted me practical immortality in this new body! I won the Galactic Lottery! Isn’t that great news, you (I)’ll beat death! I’ll see you (me) around, I’ll even attend your (my) funeral.”
EDIT: Let’s settle this, wedrified, once and for all! With a poll, I mean:
Copy walking in as described in the scenario above. Would that really suck? [pollid:395]
Really suck? No! That’s fantastic news. No bad thing has happened and you also discovered something awesome happened. (Envy is irrational in this case.)
How bad do things have to get for you here, before you’re allowed to envy your other self?
Let me try to express that again, since intuitions differ regarding concepts like envy, jealousy, and spite. What is irrational is treating a positive outcome for something you identify with (your clone), one with no known negative effects on anything you care about, as a net bad thing.
If Kawoomba actually has the preferences he implies and those preferences are even remotely internally consistent he will either:
Pay money to not be cloned at all. (This interpretation seems unlikely and would make the whole comment misleading.)
Pay money for clones of himself to not experience any significantly desirable outcomes. (This seems batshit insane.)
Pay money to never meet his better-off clones.
Yes. Just like in principle I’d wish everyone to be well off and swimming in resources.
However, I (due to envy, jealousy, and spite) would prefer not to be the only poor guy living in such an enclave of millionaires. This is irrational, because a resource-rich environment would benefit a poor me more than an environment in which everyone else is in roughly my own socio-economic stratum.
(In a trade-off, given the choice of elevating everyone around me but myself, excluding friends and relatives, I’d do it; but given the choice, I’d rather do so for strangers I do not encounter daily than for strangers I do.)
People who live off of welfare in many European countries have a vastly better standard of living than many of the feudal lords in medieval times. Yet, they define their standard of living relative to their peers, and will feel much worse off than some medieval baron. Tyranny of relativity.
Also, re: wedrified “if my preferences are even remotely internally consistent”, well, they are not. Are yours? (Not: do you want them to be?)
Answered here (just so you don’t miss it).
Breaking my own rule of not fighting the hypothetical, it surely would. That better-off clone could well replace me in many roles, and generally take my little niche in my environment. But we should assume there are no known negative effects, in which case you’d be right, of course. So now that we’ve established that the is-state of me is irrational (as opposed to the ought-state), quo vadis? :)
Interesting. I added a poll to the grandparent. Also, let’s keep in mind that we’re not talking from the position of a perfectly rational agent, or version of yourself, but from your/my position. As such, while feelings of envy may be irrational, that does not mean that our current selves would not experience them.
If I believed immortal-dupe-me, it’d be really awesome. I mean, it would be a high-utility outcome. I’d be envious—I’d be envious of anyone given immortality if I don’t get it too. But I’d vastly prefer that outcome to no-duplication, even after it was clear I was mortal-dupe-me. If one person was going to get immortality I’d rather it was a duplicate of me than anyone else except a tiny number of my nearest and dearest.
Pre-duplication me is the same as mortal-dupe-me and immortal-dupe-me, but the two afterwards are not the same person. I’d rather be immortal-dupe-me than mortal-dupe-me (hence envy) but I’d rather immortal-dupe-me existed than didn’t.
We could have some real fun together, for as long as I (mortal-dupe-me) have left. For one thing, we could do a lot of really cool practical research into immortality and benevolent alien-gods.
Which leads me to my main point, which is that all this is perhaps beside the point. If someone looking just like me walked through the door and gave a speech like that, I simply wouldn’t believe them. There are loads of possibilities to explain that situation that don’t require such wholesale abandonment of science-as-we-know-it: practical joke, previously unknown twin, hallucination/dream, etc. It’d probably throw me off a bit, but I like to imagine I wouldn’t jump to such a wild conclusion on such a flimsy pretext. Or, put another way, my prior for the existence of interventionist benevolent alien-gods is very, very low indeed.
EDIT: Just concerning the main point / last paragraph:
Don’t fight the hypothetical, otherwise the answer to any kind of Parfit’s Hitchhiker or Newcomb’s Problem class quagmire would be “I’ve probably been duped, there is no such Omega, and if there is, it ain’t offering boxes.”
Don’t accuse people of fighting the hypothetical, when in addition to questioning if anything like that would really happen, they also respond to the hypothetical as stated.
He said it was his main point, so that’s what I responded to; I had nothing to add to his other remarks, which “[may be] beside the point”. It’s too easy to get off track with hypotheticals, and for a newcomer I thought the links might be worthwhile.
A problem with training narrow rationality skills is that without training balancing skills to a similar degree, you end up overapplying them. The classic example is that if you know how to recognize biased reasoning, but you don’t know to apply (or fail to apply, despite knowing) the same level of scrutiny to arguments you like as to arguments you don’t like, then every bias you know about makes you stupider.
You may have an overdeveloped sense of “don’t fight the hypothetical” that needs some balance from attention to what questions are important, what answers are applicable in real life. Doug’s response to your hypothetical, which fully addressed the underlying philosophical question, combined with an evaluation of how realistic the scenario is, was a very nice answer, regardless of which part of it was labelled as the main point. Your criticism that it was fighting the hypothetical was just wrong.
Bluntly telling a newcomer they are wrong when they happen to be right, and giving a bunch of links so that they have to read three articles, two of them not even relevant, to understand the criticism you are trying to make, is not an effective strategy for community building. (Giving links to interested newcomers can be good, but it should not be confrontational.)
You are reading a whole lot into very little. I’m tapping out, but am available via PM.
I’d at least be happy for my clone, because if I am supposed to love my family and offspring as normal people do, I should also love someone who shares 100% of my genetic plan, so I should be glad that someone on “Team MaoShan” got a good result. In fact, I used to use this argument to justify playing the lottery, in the sense that me losing meant that another version of me in the multiverse just did win, so I should be almost as happy. That was before I started using that money to purchase an equivalent amount of chocolate every week.