Any account that involves a step where somebody has to create a description of the structure of your brain and then create a new brain (or simulation or device) from that, is death.
You seem to think that creating a description of the structure of a brain is necessarily a destructive process. I don’t know of any reason to assume that. If a non-destructive scan exists and is carried out, then there’s no “death”, howsoever defined. Right?
But anyway, let’s grant your implicit assumption of a destructive scan, and suppose that this process has actually occurred to your brain, and “something that functions like [your] brain” has been created. Who is the resulting being? Who do they think they are? What do they do next? Do they do the sorts of things you would do? Love the people you love?
I grant that you do not consider this hypothetical being you—after all, you are hypothetically dead. But surely there is no one else better qualified to answer these questions, so it’s you that I ask.
I was referring to cryonics scenarios where the brain is being scanned because you cannot be revived and a new entity is being created based on the scan, so I was assuming that your brain is no longer viable rather than that the scan is destructive.
The resulting being, if possible, would be a being that is confused about its identity. It would be a cruel joke played on those who know me and, possibly, on the being itself (depending on the type of being it is). I am not my likeness.
Consider that, if you had this technology, you could presumably create a being that thinks it is a fictional person. You could fool it into thinking all kinds of nonsensical things. Convincing it that it has the same identity as a dead person is just one among many strange tricks you could play on it.
I was referring to cryonics scenarios where the brain is being scanned because you cannot be revived and a new entity is being created based on the scan, so I was assuming that your brain is no longer viable rather than that the scan is destructive.
Fair enough.
The resulting being, if possible, would be a being that is confused about its identity. [...] Consider that, if you had this technology, you could presumably create a being that thinks it is a fictional person. You could fool it into thinking all kinds of nonsensical things.
I’m positing that the being has been informed about how it was created; it knows that it is not the being it remembers, um, being. So it has the knowledge to say of itself, if it were so inclined, “I am a being purposefully constructed ab initio with all of the memories and cognitive capacities of scientism, RIP.”
Would it be so inclined? If so, what would it do next? (Let us posit that it’s a reconstructed embodied human being.) For example, would it call up your friends and introduce itself? Court your former spouse (if you have one), fully acknowledging that it is not the original you? Ask to adopt your children (if you have any)?
It would have false memories, etc., and having my false memories, it would presumably know that these are false memories and that it has no right to assume my identity, contact my friends and family, court my spouse, etc., simply because it (falsely) thinks itself to have some connection with me (to have had my past experiences). It might still contact them anyway, given that I imagine its emotional state would be fragile; it would surely be a very difficult situation to be in, one that would probably horrify everybody involved.
I suppose, to put myself in that situation, I would, willpower permitting, have the false memories removed (if possible), adopt a different name and perhaps change my appearance (or at least move far away). But I see the situation as unimaginably cruel. You’re creating a being—presumably a thinking, feeling being—and tricking it into thinking it did certain things in the past, etc., that it did not do. Even if it knows that it was created, that still seems like a terrible situation to be in, since it’s essentially a form of (inflicted) mental illness.
!!… I hope you mean explicit memory but not implicit memory—otherwise there wouldn’t be much of a being left afterwards...
“tricking” it into thinking it did certain things in the past
For a certain usage of “tricking” this is true, but that usage is akin to the way optical illusions trick one’s visual system rather than denoting a falsehood deliberately embedded in one’s explicit knowledge.
I would point out that the source of all the hypothetical suffering in this situation would be the being’s (and your) theory of identity rather than the fact of anyone’s identity (or lack thereof). If this isn’t obvious, just posit that the scenario is conceivable but hasn’t actually happened, and some bastard deceives you into thinking it has—or even just casts doubt on the issue in either case.
Of course that doesn’t mean the theory is false—but I do want to say that from my perspective it appears that the emotional distress would come from reifying a naïve notion of personal identity. Even the word “identity”, with its connotations of singleness, stops being a good one in the hypothetical.
Have you seen John Weldon’s animated short To Be? You might enjoy it. If you watch it, I have a question for you: would you exculpate the singer of the last song?
I take it that my death and the being’s ab initio creation are both facts. These aren’t theoretical claims. The claim that I am “really” a description of my brain (that I am information, pattern, etc) is as nonsensical as the claim that I am really my own portrait, and so couldn’t amount to a theory. In fact, the situation is analogous to someone taking a photo of my corpse and creating a being based on its likeness. The accuracy of the resulting being’s behaviour, its ability to fool others, and its own confused state doesn’t make any difference to the argument. It’s possible to dream up scenarios where identity breaks down, but surely not ones where we have a clear example of death.
I would also point out that there are people who are quite content with severe mental illness. You might have delusions of being Napoleon and be quite happy about it. Perhaps such a person would argue that “I feel like Napoleon and that’s good enough for me!”
In the animation, the woman commits suicide and the woman created by the teleportation device is quite right that she isn’t responsible for anything the other woman did, despite resembling her.
I take it that my death and the being’s ab initio creation are both facts.
In the hypothetical, your brain has stopped functioning. Whether this is sufficient to affirm that you died is precisely the question at issue. Personally, it doesn’t matter to me if my brain’s current structure is the product of biological mechanisms operating continuously by physical law or is the product of, say, a 3D printer and a cryonically-created template—also operating by physical law. Both brains are causally related to my past self in enough detail to make the resulting brain me in every way that matters to me.
In the animation, the woman commits suicide and the woman created by the teleportation device is quite right that she isn’t responsible for anything the other woman did, despite resembling her.
Curious that she used the transmission+reconstruction module while committing “suicide”, innit? She didn’t have to—it was a deliberate choice.
The brain constructed in your likeness is only normatively related to your brain. That’s the point I’m making. The step where you make a description of the brain is done according to a practice of representation. There is no causal relationship between the initial brain and the created brain. (Or, rather, any causal relationship is massively dispersed through human society and history.) It’s a human being, or perhaps a computer programmed by human beings, in a cultural context with certain practices of representation, that creates the brain according to a set of rules.
This is obvious when you consider how the procedure might be developed. We would have to have a great many trial runs and would decide when we had got it right. That decision would be based on a set of normative criteria, a set of measurements. So it would only be “successful” according to a set of human norms. The procedure would be a cultural practice rather than a physical process. But there is just no such thing as something physical being “converted” or “transformed” into a description (or information or a pattern or representation), because these are all normative concepts, so such a step cannot possibly conserve identity.
As I said, the only way the person in cryonic suspension can continue to live is through a standard process of revival—that is, one that doesn’t involve the step of being described and then having a likeness created—and if such a revival doesn’t occur, the person is dead. This is because the process of being described and then having a likeness created isn’t any sort of revival at all and couldn’t possibly be. It’s a logical impossibility.
My response to this is very simple, but it’s necessary to know beforehand that the brain’s operation is robust to many low-level variations, e.g., thermal noise that triggers occasional random action potentials at a low rate.
We would have to have a great many trial runs and would decide when we had got it right.
Suppose our standard is that we get it right when the reconstructed brain is more like the original brain just before cryonic preservation than a brain after a good night’s sleep is like that same brain before sleeping—within the subset of brain features that are not robust to variation. Further suppose that that standard is achieved through a process that involves a representation of the structure of the brain. Although the representation is indeed a “cultural practice”, the brute fact of the extreme degree of similarity of the pre- and post-process brains would seem much more relevant to the question of preservation of any aspect of the brain worthy of being called “identity”.
ETA: Thinking about this a bit more, I see that the notion of “similarity” in the above argument is also vulnerable to the charge of being a mere cultural practice. So let me clarify that the kind of similarity I have in mind basically maps to reproducibility of the input-output relation of a low-level functional unit, up to, say, the magnitude of thermal noise. Reproducibility in this sense has empirical content; it is not merely culturally constructed.
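To make “reproducibility up to the magnitude of thermal noise” concrete, here is a minimal sketch of the kind of test I have in mind. Everything in it is hypothetical: unit_a and unit_b stand in for simulations of the same low-level functional unit before preservation and after reconstruction, and the noise figure is invented purely for illustration.

```python
import numpy as np

# Hypothetical: unit_a and unit_b are simulations of the same low-level
# functional unit (say, a neuron) before preservation and after
# reconstruction. Each maps an input to a noisy scalar response.
THERMAL_NOISE_SCALE = 0.05  # illustrative magnitude of intrinsic noise

def responses(unit, inputs, trials=100):
    """Average each response over many trials to wash out intrinsic noise."""
    return np.array([np.mean([unit(x) for _ in range(trials)]) for x in inputs])

def reproducible(unit_a, unit_b, inputs):
    """True if the two units' input-output relations agree to within the
    assumed thermal-noise magnitude on every sampled input."""
    diff = np.abs(responses(unit_a, inputs) - responses(unit_b, inputs))
    return bool(np.all(diff < THERMAL_NOISE_SCALE))

# Toy stand-ins: identical transfer function, independent noise draws.
rng = np.random.default_rng(0)
unit_a = lambda x: np.tanh(x) + rng.normal(0, 0.01)
unit_b = lambda x: np.tanh(x) + rng.normal(0, 0.01)

print(reproducible(unit_a, unit_b, inputs=np.linspace(-2, 2, 25)))  # True
```

The point of the sketch is that the pass/fail criterion bottoms out in measured input-output agreement, which is empirical, even if the choice of threshold is conventional.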
I don’t see how using more detailed measurements makes it any less a cultural practice. There isn’t a limit you can pass where doing something according to a standard suddenly becomes a physical relationship. Regardless, consider that you could create as many copies to that standard as you wished, so you now have a one-to-many relationship of “identity” according to your scenario. Such a type-token relationship is typical of norm-based standards (such as mediums of representation) because they are norm-based standards (that is, because you can make as many according to the standard as you wish).
I don’t see how using more detailed measurements makes it any less a cultural practice.
I’m not saying it’s not a cultural practice. I’m saying that the brute fact of the extreme degree of similarity (and resulting reproducibility of functionality) of the pre- and post-process brains seems like a much more relevant fact. I don’t know why I should care that the process is a cultural artifact if the pre- and post-process brains are so similar that for all possible inputs, they produce the same outputs. That I can get more brains out than I put in is a feature, not a bug, even though it makes the concept of a singular identity obsolete.
It’s possible to dream up scenarios where identity breaks down, but surely not ones where we have a clear example of death.
I don’t know what the word “clear” in that sentence actually means.
If you’re simply asserting that what has occurred in this example is your death, then no, it isn’t clear, any more than if I assert that I actually died 25 minutes ago, that’s clear evidence that Internet commenting after death is possible.
I’m not saying you’re necessarily wrong… I mean, sure, it’s possible that you’re correct, and in your hypothetical scenario you actually are dead, despite the continued existence of something that acts like you and believes itself to be you. It’s also possible that in my hypothetical scenario I’m correct and I really did die 25 minutes ago, despite the continued existence of something that acts like me and believes itself to be me.
I’m just saying it isn’t clear… in other words, that it’s also possible that one or both of us is confused/mistaken about what it means for us to die and/or remain alive.
In the example being discussed we have a body. I can’t think of a clearer example of death than one where you can point to the corpse or remains. You couldn’t assert that you died 25 minutes ago, since death is the termination of your existence and so logically precludes asserting anything (nothing could count as evidence for you doing anything after death, although your corpse might do things); but if somebody else asserted that you died 25 minutes ago, then they could presumably point to your remains, or explain what happened to them. If you continued to post on the Internet, that would be evidence that you hadn’t died. Although the explanation that someone just like you was continuing to post on the Internet would be consistent with your having died.
OK, I think I understand what you mean by “clear” now. Thanks.
Now, if I understand the “two particles of the same type are identical” argument in the context of uploading/copying, it shouldn’t be relevant, because two huge multi-particle configurations are not going to be identical. You cannot measure the state of each particle in the original and you cannot precisely force each particle in the copy into that state. And no amount of similarity is enough; the two of you would have to be identical in the sense that two electrons are identical, if we’re talking about Feynman paths over which your amplitude would be summed. And that rules out digital simulations altogether.
But I didn’t really expect any patternists to defend the first way you could be right in my post. Whereas the second way you might be right amounts to, by my definition, proving to me that I am already dead or that I die all the time. If that’s the case, all bets are off, and everything I care about is due for a major reassessment.
I’d still want to know the truth, of course. But the strong form of that argument (that I already experience on a recurring basis the same level of death as you would if you were destructively scanned) is not yet proven to be the truth. Only a plausible hypothesis for which (or against which) I have not yet seen much evidence.
But the strong form of that argument (that I already experience on a recurring basis the same level of death as you would if you were destructively scanned) is not yet proven to be the truth.
Can you taboo “level of death” for me? Also, what sorts of experiences would count as evidence for or against the proposition?
Discontinuity. Interruption of inner narrative. You know how the last thing you remember was puking over the toilet bowl and then you wake up on the bathroom floor and it’s noon? Well, that but minus everything that goes after the word “bowl”.
Or the technical angle—whatever routine occurrence it is that supposedly disrupts my brain state as much as a destructive scan and rounding to the precision limit of whatever substrate my copy would be running on.
Darn it. I asked two questions—sorry, my mistake—and I find I can’t unequivocally assign your response to one question or the other (or different parts of your response to both).
I guess this would be my attempt to answer your first question: articulating what I meant without the phrase “level of death”.
My answer to your second question is tougher. Somewhat compelling evidence that whatever I value has been preserved would be simultaneously experiencing life from the point of view of two different instances. This could be accomplished perhaps through frequent or continuous synchronization of the memories and thoughts of the two brains. Another convincing experience (though less so) would be gradual replacement of individual biological components that would have otherwise died, with time for the replacement parts to be assimilated into the existing system of original and earlier-replaced components.
If I abruptly woke up in a new body with all my old memories, I would be nearly certain that the old me has experienced death if they are not around; if they are still around (without any link to each other’s thoughts), then I am the only one who has tangibly benefited from whatever rejuvenating/stabilizing effects the replication/uploading might have, and they have not. If I awoke from cryostasis in my old body (or head, as the case may be), even then I would only ever be 50% sure that the individual entering cryostasis is not the one experiencing waking up (unless there was independent evidence of weak activity in my brain during cryostasis).
The way for me to be convinced, not that continuity has been preserved but rather that my desire for continuity is impossible, does double duty with my answer to the first question:
whatever routine occurrence it is that supposedly disrupts my brain state as much as a destructive scan and rounding to the precision limit of whatever substrate my copy would be running on.
[Unambiguous, de-mystifying neurological characterization of...]
Actually, let’s start by supposing a non-destructive scan.
The resulting being is someone who is identical to you, but diverges at the point where the scan was performed.
Let’s say your problem is that you have a fatal illness. You’ve been non-destructively scanned, and the scan was used to construct a brand new healthy you who does everything you would do, loves the people you love, etc. Well, that’s great for him, but you are still suffering from a fatal illness. One of the brainscan technicians helpfully suggests they could euthanize you, but if that’s a solution to your problem then why bother getting scanned and copied in the first place? You could achieve the same subjective outcome by going straight to the euthanasia step.
Now, getting back to the destructive scan. The only thing that’s different is you skip the conversation with the technician and go straight to the euthanasia step. Again, an outcome you could have achieved more cheaply with a bottle of sleeping pills and a bottle of Jack Daniels.
After the destructive scan, a being exists that remembers being me up to the point of that scan, values all the things I value, loves the people I love and will be there for them. Regardless of anyone’s opinion about whether that being is me, that’s an outcome I desire, and I can’t actually achieve it with a bottle of sleeping pills and a bottle of Jack Daniels. Absolutely the same goes for the non-destructive scan scenario.
...maybe you don’t have kids?
Oh, I do, and a spouse.
I want to accomplish both goals: have them be reunited with me, and for myself to experience being reunited with them. Copying only accomplishes the first goal, and so is not enough. So long as there is any hope of actual revival, I do not wish to be destructively scanned nor undergo any preservation technique that is incompatible with actual revival. I don’t have a problem with provably non-destructive scans. Hell, put me on Gitorious for people to download, just delete the porn first.
My spouse will probably outlive me, and hopefully if my kids have to get suspended at all, it will be after they have lived to a ripe old age. So everyone will have had some time to adjust to my absence, and would not be too upset about having to wait a little longer. Otherwise, we could form a pact where we revive whenever the conditions for the last of our revivals are met. I should remember to run this idea by them when they wake up. Well, at least the ones of them who talk in full sentences.
Or maybe this is all wishful thinking—someone who thinks that what we believe is silly will just fire up the microtome and create some uploads that are “close enough” and tell them it was for their own good.
Sticking with the non-destructive scan + terminal illness scenario: before the scan is carried out, do you anticipate (i) experiencing being reunited with your loved ones; (ii) requesting euthanasia to avoid a painful terminal disease; (iii) both (but not both simultaneously for the same instance of “you”)?
Probably (iii) is the closest to the truth, but without euthanasia. I’d just eventually die, fighting it to the very end. Apparently this is an unusual opinion or something, because people have such a hard time grasping this simple point: what I care about is the continuation of my inner narrative for as long as possible. Even if it’s filled with suffering. I don’t care. I want to live. Forever if possible, for an extra minute if that’s all there is.
A copy may accomplish my goal of helping my family, but it does absolutely nothing to accomplish my goal of survival. As a matter of self-preservation I have to set the record straight whenever someone claims otherwise.
what I care about is the continuation of my inner narrative for as long as possible
Okay—got it. What I don’t grasp is why you would care about the inner narrative of any particular instance of “you” when the persistence of that instance makes negligible material difference to all the other things you care about.
To put it another way: if there’s only a single instance of “me”—the only extant copy of my particular values and abilities—then its persistence cannot be immaterial to all the other things I care about, and that’s why I currently care about my persistence more-or-less unconditionally. If there’s more than one copy of “me” kicking around, then “more-or-less unconditionally” no longer applies. My own internal narrative doesn’t enter into the question, and I’m confused as to why anyone else would give their own internal narrative any consideration.
ETA: So, I mean, the utility function is not up for grabs. If we both agree as to what would actually be happening in these hypothetical scenarios, but disagree about what we value, then clauses like “patternists could be wrong” refer to an orthogonal issue.
Okay—got it. What I don’t grasp is why you would care about the inner narrative of any particular instance of “you” when the persistence of that instance makes negligible material difference to all the other things you care about.
Maybe the same why as why some people care more about their families than about other people’s families, or why some people care more about themselves than about strangers. What I can’t grasp is how one would manage to so thoroughly eradicate or suppress such a fundamental drive.
What, kin selection? Okay, let me think through the implications...
I don’t understand the response. Are you saying that the reason you don’t have an egocentric world view and I do is in some way because of kin selection?
You said,
And why do people generally care more about their families than about other people’s families? Kin selection.
If we both agree as to what would actually be happening in these hypothetical scenarios, but disagree about what we value, then clauses like “patternists could be wrong” refer to an orthogonal issue.
Patternists/computationalists make the, in principle, falsifiable assertion that if I opt for plastination and am successfully reconstructed, I will wake up in the future just as I will if I opt for cryonics and am successfully revived without copying/uploading/reconstruction. My assertion is that if I opt for plastination I will die and be replaced by someone hard or impossible to distinguish from me. Since it takes more resources to maintain cryosuspension, and probably a more advanced technology level to thaw and reanimate the patient, if the patternists are right, plastination is a better choice. If I’m right, it is not an acceptable choice at all.
The problem is that, so far, the only being in the universe who could falsify this assertion is the instantiation of me that is writing this post. Perhaps with increased understanding of neuroscience, there will be additional ways to test the patternist hypothesis.
the, in principle, falsifiable assertion that if I opt for plastination that I will wake up in the future with an equal or greater probability than if I opt for cryonics
I’m not sure what you mean here. Probability statements aren’t falsifiable; Popper would have had a rather easier time if they were. Relative frequencies are empirical, and statements about them are falsifiable...
My assertion is that I will die and be replaced by someone hard or impossible to distinguish from me.
At the degree of resolution we’re talking about, talking about you/not-you at all seems like a blegg/rube distinction. It’s just not a useful way of thinking about what’s being contemplated, which in essence is that certain information-processing systems are running, being serialized, stored, loaded, and run again.
Oops, you’re right. I have now revised it.
Suppose your brain has ceased functioning, been recoverably preserved and scanned, and then revived and copied. The two resulting brains are indistinguishable in the sense that for all possible inputs, they give identical outputs. (Posit that this is a known fact about the processes that generated them in their current states.) What exactly is it that makes the revived brain you and the copied brain not-you?
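In case the “serialized, stored, loaded, and run again” framing sounds abstract, here is a toy sketch of it that assumes nothing about brains: a trivial stateful process is serialized, the original instance is discarded, and two instances are loaded from the same stored bytes. The class and the events are invented for illustration.

```python
import pickle

class Process:
    """Toy stand-in for a running information-processing system."""
    def __init__(self):
        self.memory = []
    def step(self, event):
        self.memory.append(event)
        return f"recalls {len(self.memory)} events"

original = Process()
original.step("puking over the toilet bowl")

snapshot = pickle.dumps(original)   # serialize ("scan")
original = None                     # original instance gone ("destructive")

revived = pickle.loads(snapshot)    # load and run again
copied = pickle.loads(snapshot)     # a second instance from the same scan

# For any further input, the two give identical outputs: nothing in their
# state distinguishes "the revived one" from "the copy".
assert revived.step("waking up") == copied.step("waking up")
```

On this framing, the revived/copied distinction does no work, which is the blegg/rube point above.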
So, I mean, the utility function is not up for grabs.
And yet, what is to be done if your utility function is dissolved by the truth? How do we know that there even exist utility functions that retain their currency down to the level of timeless wave functions?
I haven’t thought really deeply about that, but it seems to me that if Egan’s Law doesn’t offer you some measure of protection and also a way to cope with failures of your map, you’re probably doing it wrong.
A witty quote from a great book by a brilliant author is awesome, but does not have the status of any sort of law.
What do we mean by “normality”? What you observe around you every day? If you are wrong about the unobserved causal mechanisms underlying your observations, you will make wrong decisions. If you walk on hot coals because you believe God will not let you burn, the normality that quantum mechanics adds up to diverges enough from your normality that there will be tangible consequences. Are goals part of normality? If not, they certainly depend on assumptions you make about your model of normality. Either way, when you discover that God can’t/won’t make you fireproof, some subset of your goals will (and should) come tumbling down. This too has tangible consequences.
Some subset of the remaining goals relies on more subtle errors in your model of normality and they too will at some point crumble.
What evidence do we have that any goals at all are stable at every level? Why should the goals of a massive blob of atoms have such a universality?
I can see the point of “it all adds up to normality” if you’re encouraging someone to not be reluctant to learn new facts. But how does it help answer the question of “what goal do we pursue if we find proof that all our goals are bullshit”?
My vague notion is that if your goals don’t have ramifications in the realm of the normal, you’re doing it wrong. If they do, and some aspect of your map upon which goals depend gets altered in a way that invalidates some of your goals, you can still look at the normal-realm ramifications and try to figure out if they are still things you want, and if so, what your goals are now in the new part of your map.
Keep in mind that your “map” here is not one fixed notion about the way the world works. It’s a probability distribution over all the ways the world could work that are consistent with your knowledge and experience. In particular, if you’re not sure whether “patternists” (whatever those are) are correct or not, this is a fact about your map that you can start coping with right now.
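For instance, you can fold that uncertainty into an ordinary expected-value comparison over the plastination-versus-cryonics choice discussed above. A minimal sketch, with every number invented purely for illustration:

```python
# Acting under an uncertain map: fold "are the patternists right?" into an
# ordinary expected-value comparison. Every number below is made up.
p_patternism = 0.5           # credence that a reconstruction is survival

# Payoffs conditional on which theory of identity is true (arbitrary units).
payoffs = {
    #                (patternism true, patternism false)
    "plastination": (100,  0),  # cheap, reliable; worthless if it is death
    "cryonics":     ( 60, 60),  # costly, fragile; survival either way if revived
}

for option, (if_true, if_false) in payoffs.items():
    ev = p_patternism * if_true + (1 - p_patternism) * if_false
    print(f"{option}: expected value {ev:.0f}")
# At these made-up numbers cryonics wins (60 > 50); raise p_patternism above
# 0.6 and plastination wins. The choice turns on your map, not on certainty.
```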
It might be that the Dark Lords of the Matrix are just messing with you, but really, the unknown unknowns would have to be quite extreme to totally upend your goal system.