I wish I could give you more up-votes for explicitly making existential catastrophe part of your calculations; too many people focus on the technical considerations to the exclusion of other relevant unknowns.
Here are mine (explanation of edit—oops, patternists being wrong and right didn’t sum to 1, fixed now):
Cryo
(50% not being ended by an intervening existential catastrophe) x
(80% fellow humans come through) x
[(70% patternists are wrong) x (50% cryo sufficient to preserve whatever it is I call my continuity strictly by repairing my original instance) +
(30% patternists are right) x (70% cryo sufficient to preserve whatever it is I call my continuity by any means)]
= 22.4%
Plastination
(70% not being ended by an intervening existential catastrophe) x
(85% fellow humans come through) x
[(70% patternists are wrong) x (0% plastination sufficient to preserve whatever it is I call my continuity by any means) +
(30% patternists are right) x (90% plastination sufficient to preserve whatever it is I call my continuity by any means)]
= 16.065%
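The two estimates above share the same structure, so here is a minimal sketch of the arithmetic in Python. The function name and parameter names are mine, chosen for illustration; the probabilities are the ones stated above.

```python
def revival_probability(p_no_xrisk, p_humans_come_through,
                        p_patternists_wrong,
                        p_works_if_wrong, p_works_if_right):
    """Expected probability of revival:
    P(no x-risk) * P(humans come through)
      * [P(patternists wrong) * P(method works given they're wrong)
         + P(patternists right) * P(method works given they're right)]."""
    return (p_no_xrisk * p_humans_come_through
            * (p_patternists_wrong * p_works_if_wrong
               + (1 - p_patternists_wrong) * p_works_if_right))

# Cryo: 50% x 80% x [70% x 50% + 30% x 70%]
cryo = revival_probability(0.50, 0.80, 0.70, 0.50, 0.70)

# Plastination: 70% x 85% x [70% x 0% + 30% x 90%]
plastination = revival_probability(0.70, 0.85, 0.70, 0.00, 0.90)

print(f"Cryo:         {cryo:.1%}")          # 22.4%
print(f"Plastination: {plastination:.3%}")  # 16.065%
```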
Explanations: among existential risks I count human-made problems that would prevent people who would revive me from doing so. So, fellow humans coming through simply means that, conditional on no existential catastrophe and on it being technically possible to revive me at all, someone at some point in the future is willing and able to revive me.
For patternists to be right, both the following would have to be true...
A sufficiently accurate representation of you is you (to the point that your amplitude would sum over all such representations that you coexist with).
It is theoretically possible to achieve such a representation by uploading or reconstruction with molecular precision or some other reconstruction or simulation technique.
… or the following has to be true:
My sense of continuing inner narrative is an illusion that will be destroyed by a more accurate understanding of reality, and amounts to wanting something that is an empty set, a dangling object, like meeting the real Mickey Mouse or knowing what prime numbers taste like.
It’s unlikely that patternists can be right in the first way, because there is no evidence that very-very-similar configurations can validly be substituted for identical configurations, especially if the replica in question did not evolve from the original, and even more so if the replica is running on a completely different substrate than the original. Even if they are right in that way, will it ever be possible to test experimentally?
It is far more likely that patternists are right in the second way. My terminal goal is continuity (and I value accurate preservation of the data in my brain only because I think it is instrumental to preserving continuity). The main reason I am signed up for cryonics is because I believe it is more likely to succeed than any currently available alternative. If all alternatives are fundamentally doomed to failure, the entire question becomes largely moot. I’m therefore conditioning my above probabilities on the patternists not being right in the second way.
If continuing inner narrative is an illusion, and you find yourself in a simulation, then you could very well realize your dream of meeting “the real Mickey Mouse.” If we can be accurately simulated, why not living cartoons as well? A lot becomes possible by getting rid of identity-related illusions.
If I’m already in a simulation, that’s a different story. A fait accompli. At the very least I’ll have an empirical answer to the burning question of whether it’s possible for me to exist within the simulation, though I’ll still have no way of knowing whether I am really the same person as whoever I am simulating.
But until I find a glitch that makes objects disappear, render in wireframe, etc., I have no reason to give simulation arguments all that much more credence than heaven and hell.
I was assuming the shattering of the illusion of continuing inner narrative made the question of “really the same” irrelevant/nonsensical.