Actually, let’s start by supposing a non-destructive scan.
The resulting being is someone who is identical to you up to the point where the scan was performed, and who diverges from you thereafter.
Let’s say your problem is that you have a fatal illness. You’ve been non-destructively scanned, and the scan was used to construct a brand new healthy you who does everything you would do, loves the people you love, etc. Well, that’s great for him, but you are still suffering from a fatal illness. One of the brainscan technicians helpfully suggests they could euthanize you, but if that’s a solution to your problem then why bother getting scanned and copied in the first place? You could achieve the same subjective outcome by going straight to the euthanasia step.
Now, getting back to the destructive scan. The only thing that’s different is you skip the conversation with the technician and go straight to the euthanasia step. Again, an outcome you could have achieved more cheaply with a bottle of sleeping pills and a bottle of Jack Daniels.
After the destructive scan, a being exists that remembers being me up to the point of that scan, values all the things I value, loves the people I love and will be there for them. Regardless of anyone’s opinion about whether that being is me, that’s an outcome I desire, and I can’t actually achieve it with a bottle of sleeping pills and a bottle of Jack Daniels. Absolutely the same goes for the non-destructive scan scenario.
...maybe you don’t have kids?
Oh, I do, and a spouse.
I want to accomplish both goals: for them to be reunited with me, and for me to experience being reunited with them. Copying accomplishes only the first goal, and so is not enough. So long as there is any hope of actual revival, I do not wish to be destructively scanned, nor to undergo any preservation technique that is incompatible with actual revival. I don’t have a problem with provably non-destructive scans. Hell, put me on Gitorious for people to download, just delete the porn first.
My spouse will probably outlive me, and hopefully, if my kids have to be suspended at all, it will be after they have lived to a ripe old age. So everyone will have had some time to adjust to my absence, and would not be too upset about having to wait a little longer. Otherwise, we could form a pact that none of us is revived until the conditions for reviving the last of us are met. I should remember to run this idea by them when they wake up. Well, at least by the ones who talk in full sentences.
Or maybe this is all wishful thinking—someone who thinks that what we believe is silly will just fire up the microtome and create some uploads that are “close enough” and tell them it was for their own good.
Sticking with the non-destructive scan + terminal illness scenario: before the scan is carried out, do you anticipate (i) experiencing being reunited with your loved ones; (ii) requesting euthanasia to avoid a painful terminal disease; (iii) both (but not both simultaneously for the same instance of “you”)?
Probably (iii) is the closest to the truth, but without the euthanasia. I’d just eventually die, fighting it to the very end. Apparently this is an unusual opinion or something, because people have such a hard time grasping this simple point: what I care about is the continuation of my inner narrative for as long as possible. Even if it’s filled with suffering. I don’t care. I want to live. Forever if possible, for an extra minute if that’s all there is.
A copy may accomplish my goal of helping my family, but it does absolutely nothing to accomplish my goal of survival. As a matter of self-preservation I have to set the record straight whenever someone claims otherwise.
Okay—got it. What I don’t grasp is why you would care about the inner narrative of any particular instance of “you” when the persistence of that instance makes negligible material difference to all the other things you care about.
To put it another way: if there’s only a single instance of “me”—the only extant copy of my particular values and abilities—then its persistence cannot be immaterial to all the other things I care about, and that’s why I currently care about my persistence more-or-less unconditionally. If there’s more than one copy of “me” kicking around, then “more-or-less unconditionally” no longer applies. My own internal narrative doesn’t enter into the question, and I’m confused as to why anyone else would give their own internal narrative any consideration.
ETA: So, I mean, the utility function is not up for grabs. If we both agree as to what would actually be happening in these hypothetical scenarios, but disagree about what we value, then clauses like “patternists could be wrong” refer to an orthogonal issue.
Maybe for the same reason that some people care more about their families than about other people’s families, or that some people care more about themselves than about strangers. What I can’t grasp is how one would manage to so thoroughly eradicate or suppress such a fundamental drive.
What, kin selection? Okay, let me think through the implications...
I don’t understand the response. Are you saying that the reason you don’t have an egocentric worldview and I do somehow comes down to kin selection?
You said,
Maybe for the same reason that some people care more about their families than about other people’s families.
And why do people generally care more about their families than about other people’s families? Kin selection.
Patternists/computationalists make the (in principle falsifiable) assertion that if I opt for plastination and am successfully reconstructed, I will wake up in the future just as I would if I opt for cryonics and am successfully revived without copying/uploading/reconstruction. My assertion is that if I opt for plastination I will die and be replaced by someone hard or impossible to distinguish from me. Since it takes more resources to maintain cryosuspension, and probably a more advanced technology level to thaw and reanimate the patient, plastination is the better choice if the patternists are right. If I’m right, it is not an acceptable choice at all.
The problem is that, so far, the only being in the universe who could falsify this assertion is the instantiation of me that is writing this post. Perhaps with increased understanding of neuroscience, there will be additional ways to test the patternist hypothesis.
the, in principle, falsifiable assertion that if I opt for plastination that I will wake up in the future with an equal or greater probability than if I opt for cryonics
I’m not sure what you mean here. Probability statements aren’t falsifiable; Popper would have had a rather easier time if they were. Relative frequencies are empirical, and statements about them are falsifiable...
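As an aside, the distinction can be made concrete with a toy coin-flip sketch. This is my own illustration with invented numbers, not anything from the thread: the hypothesis "P(heads) = 0.5" assigns positive probability to every finite sequence of flips, so no finite observation strictly contradicts it, whereas a claim about the relative frequency in a particular recorded dataset is simply true or false of the record.

```python
import random

# Toy illustration; the coin and numbers are invented. Under the hypothesis
# "P(heads) = 0.5", every finite flip sequence has probability 0.5**n > 0,
# so no finite observation is strictly inconsistent with the hypothesis.
def prob_of_sequence(seq, p_heads=0.5):
    """Probability the hypothesis assigns to this exact 'H'/'T' sequence."""
    return p_heads ** seq.count("H") * (1 - p_heads) ** seq.count("T")

flips = "".join(random.choice("HT") for _ in range(100))
print(prob_of_sequence(flips) > 0)  # always True: no sequence is "impossible"

# A relative-frequency statement about this particular record, e.g. "at
# least 40 of these 100 flips came up heads", is simply true or false of
# the data, and so is directly checkable.
print(flips.count("H") >= 40)
```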
My assertion is that I will die and be replaced by someone hard or impossible to distinguish from me.
At the degree of resolution we’re talking about, talking about you/not-you at all seems like a blegg/rube distinction. It’s just not a useful way of thinking about what’s being contemplated, which in essence is that certain information-processing systems are running, being serialized, stored, loaded, and run again.
Oops, you’re right. I have now revised it.
Suppose your brain has ceased functioning, been recoverably preserved and scanned, and then revived and copied. The two resulting brains are indistinguishable in the sense that for all possible inputs, they give identical outputs. (Posit that this is a known fact about the processes that generated them in their current states.) What exactly is it that makes the revived brain you and the copied brain not-you?
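Since the framing above leans on "serialized, stored, loaded, and run again" and on "identical outputs for all possible inputs", here is a minimal Python sketch of what that means operationally. Everything in it (the Agent class, its toy "memories") is invented for illustration; the sketch takes no side on the identity question.

```python
import pickle

# Minimal sketch (invented for illustration) of "running, serialized, stored,
# loaded, and run again". Nothing here settles the identity question; it only
# shows what "identical outputs for all inputs" means for a stateful process.
class Agent:
    def __init__(self, memories):
        self.memories = list(memories)

    def respond(self, prompt):
        # Output is a deterministic function of internal state plus input.
        return f"{prompt}? I remember {len(self.memories)} things."

original = Agent(["childhood", "family", "this conversation"])
snapshot = pickle.dumps(original)   # the "scan": serialize the full state

revived = pickle.loads(snapshot)    # reconstruction from the snapshot
copy = pickle.loads(snapshot)       # a second, equally faithful instantiation

# For any given input, all three produce identical outputs...
q = "who am I"
assert original.respond(q) == revived.respond(q) == copy.respond(q)

# ...but they are distinct instances, and they diverge the moment their
# internal states do (cf. "diverges at the point where the scan was performed").
copy.memories.append("waking up as the copy")
assert original.respond(q) != copy.respond(q)
```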
And yet, what is to be done if your utility function is dissolved by the truth? How do we know that there even exist utility functions that retain their currency down to the level of timeless wave functions?
I haven’t thought really deeply about that, but it seems to me that if Egan’s Law doesn’t offer you some measure of protection and also a way to cope with failures of your map, you’re probably doing it wrong.
A witty quote from a great book by a brilliant author is awesome, but it does not have the status of any sort of law.
What do we mean by “normality”? What you observe around you every day? If you are wrong about the unobserved causal mechanisms underlying your observations, you will make wrong decisions. If you walk on hot coals because you believe God will not let you burn, the normality that quantum mechanics adds up to diverges enough from your normality that there will be tangible consequences. Are goals part of normality? If not, they certainly depend on assumptions you make about your model of normality. Either way, when you discover that God can’t/won’t make you fireproof, some subset of your goals will (and should) come tumbling down. This too has tangible consequences.
Some subset of the remaining goals relies on subtler errors in your model of normality, and those goals too will at some point crumble.
What evidence do we have that any goals at all are stable at every level? Why should the goals of a massive blob of atoms have such universality?
I can see the point of “it all adds up to normality” if you’re encouraging someone to not be reluctant to learn new facts. But how does it help answer the question of “what goal do we pursue if we find proof that all our goals are bullshit”?
My vague notion is that if your goals don’t have ramifications in the realm of the normal, you’re doing it wrong. If they do, and some aspect of your map upon which goals depend gets altered in a way that invalidates some of your goals, you can still look at the normal-realm ramifications and try to figure out if they are still things you want, and if so, what your goals are now in the new part of your map.
Keep in mind that your “map” here is not one fixed notion about the way the world works. It’s a probability distribution over all the ways the world could work that are consistent with your knowledge and experience. In particular, if you’re not sure whether “patternists” (whatever those are) are correct or not, this is a fact about your map that you can start coping with right now.
It might be that the Dark Lords of the Matrix are just messing with you, but really, the unknown unknowns would have to be quite extreme to totally upend your goal system.
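To make the "probability distribution over all the ways the world could work" point concrete, here is a toy Bayesian sketch. The two hypotheses and every number in it are invented placeholders: the idea is only that uncertainty about the patternist question can sit inside the map as a weight, and that evidence shifts the weight rather than shattering the map.

```python
# Toy sketch (invented numbers): the "map" as a distribution over hypotheses.
prior = {"patternism_true": 0.5, "patternism_false": 0.5}

# Suppose some future neuroscience result is judged more likely under one
# hypothesis than the other (these likelihoods are made up for illustration).
likelihood = {"patternism_true": 0.8, "patternism_false": 0.3}

# Bayes' rule: the posterior is proportional to prior times likelihood.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # {'patternism_true': 0.727..., 'patternism_false': 0.272...}
```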