Re: cryonics, assume the following:
1) Any Agent that reconstructs my mind from a plasticized or frozen brain is very smart and well-informed. It is working its way through a whole warehouse of similar 21st-century brains, and can reconstruct vast swathes of my mind with generic any-human or any-human-who-grew-up-watching-Sesame-Street boilerplate. This gets boring after the first few hundred.
2) I’m of no practical use in the post-Singularity world, with my obsolete work skills and mismatching social and moral behavior.
3) Frozen-brain reconstruction starts late enough that nobody remains alive who knows and loves me personally.
In this scenario, I expect the compressed mind reconstructions are just stored in an archive for research/entertainment purposes. Why bother ever running the reconstruction long enough for it to subjectively “wake up”?
I think that we need to let go of the idea of immortality as a continuation of our present self. The most we can hope for is that far in the future, some hyper-intelligent Agent has our memories. And probably the memories of thousands of other dead people as well.
Cryonics is most like writing a really detailed autobiography for future people to read after we’re dead. This still seems worthwhile to me, but it’s not the same thing as there being a living Charlie Davies in the 23rd century.
I took the survey.
Because it’s the right thing to do?
Good point. This website is dedicated to such an outcome, right?
If the future Agent fully revives dead people purely for selfish reasons, that might be worse than no revival at all.
Reconstructed 21st-century minds might be most valuable as stock non-player characters in RPGs. Their afterlife might consist of endlessly driving a cab in a 3-block circle, occasionally interrupted when a PC hops in and says “follow that car!”, followed by death in a fiery crash, amnesia, and a reset.
Is anyone working on legal rights for sentient software?
One would think so. Unfortunately, the majority of people here have a hard time even taking the concept of “the right thing to do” seriously.
I had this thought too: if it is likely that a reviving agent would be a slaver, and given that slavery is worse than death, I think I may well prefer death to cryonics. But that’s a very non-trivial ‘if’. I suppose the whole point of the term ‘singularity’ is that we can’t usefully extrapolate beyond a certain point to predict the behavior of such agents.