Resurrection without a backup. As with ecosystem reconstruction, such “resurrections” are in fact clever simulations. If the available information is sufficiently detailed, the individual and even close friends and associations are unable to tell the difference. However, transapient informants say that any being of the same toposophic level as the “resurrector” can see marks that the new being is not at all like the old one. “Resurrections” of historical figures, or of persons who lived at the fringes of Terragen civilization and were not well recorded, are of very uneven quality. -- Orion’s Arm—Encyclopedia Galactica—Limits of Transapient Power
You forgot the most optimistic of all:
I could do absolutely nothing, get cremated and the eventual Friendly AI will still be able to reanimate me, via time-travel or equivalent.
Before I saw your comment I made the same one.
Now I deleted mine and I’ll upvote yours.
Well, she forgot beta-level simulations too: an AI resurrecting you by modelling your recorded behavioral patterns and your DNA.
Is this a standard term? I’ve only seen it in Alastair Reynolds’s writing.
Me too, but I think resurrection without a backup should be seriously considered given the possibility of superhuman AI. That is, a simulation based on modelling the behavioural patterns of the person being copied, attempting to predict their reactions to a given stimulus. Given enough records of and by the person, plus their DNA, a sufficiently powerful AI might produce a beta-level simulation so close to the original that only a powerful posthuman being could notice any difference. I'm not sure whether Reynolds was the first person to consider this (I doubt it), but the term "beta-level simulation" seems adequate to me.
I think this should be considered conceivable, but not in the same realm of plausibility as cryonics working. If you rate the chances of cryonics working as low, this is much lower. If you rate them as extremely high, this possibility might fall in a more moderate range.
My favorite idea is to scan interstellar dust for reflected electromagnetic and gravitational data. Intuitively, I imagine this would at first resolve only massive objects like stars and planets, but with time and enough computation it could be refined to higher detail.
This is also the approach they take on the TV show Caprica.
It was my understanding that this is one of Kurzweil’s eventual goals: reconstructing his father from DNA, memories of people who knew him, and just general human stuff.
As we move farther away it becomes harder to self-identify. I have some difficulty self-identifying with an upload, but there's still a fair bit. I have a lot of trouble identifying with a beta copy. In fact, I imagine that an accurate beta version of me would spend a lot of time simply worrying about whether it is me. (Of course, now that I've put that in writing, our super-friendly AI will likely only make a beta of me that does do that worrying, which is potentially counterproductive. There's a weird issue here: the more the beta is like me, the more it will worry about this. A beta that was just like me but didn't worry would certainly not be close enough in behavior for me to self-identify with it. So such worrying in a beta is evidence that I should self-identify with it.)
You will get over it fast. Ever experienced a few hours of memory loss, or found some old writing of yours that you cannot recall? The person you are changes all the time. You will be happy to be there, and you will probably find lots of writing by other minds about how and why an upload is "you" enough.
Yes. And this bothers me a bit. Sometimes I even worry (a teeny tiny bit) that I've somehow switched into a parallel universe where I wrote slightly different things (or, in some cases, really dumb things). When I was younger I sometimes had borderline panic attacks about how, if I didn't remember some event, I could still identify as the individual who experienced it. I suspect that I'm a little less psychologically balanced about these sorts of issues than most people...
Most people maybe, but not most LWers I bet. I have had such attacks too...
I am deeply frightened by the fact that most important developments in my life are accidents. But well, there is little use in being afraid.
You could try to figure out how much you actually change from one period to the next by journaling, or by tracking mental changes. Maybe you can also find a technique that can be adapted to measure your discomfort and to try out ways of reducing it.
I externalize some of my brainpower into note-files, with some funny results.
One way to resolve the dissonance this produces is to quit identifying with yourself from one point in time to the next. Me-from-yesterday can be seen as a different self (though sharing most identity characteristics like values, goals, memories, etc.) from me-from-today.
I dislike this concept, but it seems to be what we are left with. Identity breaks down, and personhood ends.
That's an interesting way of thinking about it. My take on it is the opposite. If an accurate copy of me were made after my death, I am pretty sure the copy wouldn't care whether it was me, just as I don't care whether I am as my past self wished me to be. If the copy was convinced it was me, there would be no problem. If it was convinced it wasn't, then it wouldn't think of my death as any more important than the deaths of everyone else throughout history.