You wrote this LessWrong post arguing that cryonics is a good idea under the assumption that your readers would disagree with an argument from the core Sequences that is usually used to support that conclusion on LessWrong? To each his own.
Here are the real/hypothetical cases that mostly formed my answer to your last question:
If you were to replace every neuron in your brain with a robotic cell exactly simulating its function, one neuron at a time and timed such that your cognition is totally unaffected during the process, would this cause you any doubts about your identity?
Why doesn’t the interruption in your conscious experience caused by going to sleep make you think you’re “a different person” in any sense once you wake up, keeping in mind that a continuous identity couldn’t possibly depend on being made of the same stuff? What about when people are rendered temporarily unconscious by physical trauma, drugs, or other things over which the brain doesn’t have as much control as it does over sleep?
Does this mean that I should not fear death, since I can in principle be exactly reproduced and death is therefore not fundamentally different from sleep? In a classical sense, it is this body that I actually care about preserving, not my pattern of consciousness; that is where the fear of death comes from. And deeper, it is really my body that cares about preserving my body, not my consciousness pattern. So the problem I am having trouble wrapping my head around is this: chance alone makes the recreation of my pattern of consciousness likely, and cryonics doesn’t really add much more likelihood to it, in my opinion. At whatever point in the future I am recreated, by mere chance or by simulation, that will be the next time “I” exist, whether it’s a billion years from now, on another planet, or in another universe. Nor does cryonics stop me from dying. So what is the actual point of it, if it satisfies neither of its purposes?
Preserving that information makes it much more likely that you’ll be reproduced accurately, in a timely manner, and in a situation you would be able to enjoy, rather than in twenty quintillion years because of quantum noise or some such. Part of the point of preserving your state until it can be transferred to a more durable artifact is that there is some chain of causal events between who you were when your state was recorded and who “you” are when that state is hopefully resumed; many people seem to value that quite a bit. And you should try to avoid death regardless of your beliefs about cryonics, identity, or just about anything else.
That’s a helpful, honest answer, thanks. I have a lot of empathy but basically no sympathy in my programming, and unfortunately this extends even to my regard for my future selves. I try to avoid death in the moment and in the near future, but I don’t seem to identify with my future self beyond that. So hearing something like “Well, most other people would want such and such; now you know” at least helps me understand humans.