Instead, let’s say I froze my brain. The brain that makes me me stops working, and so the thing that is me ceases to exist. The continuity stops there.
How are sleep, unconsciousness, or deep anesthesia any different, though?
But further, why is continuity important? If intelligence can be simulated on a computer, and it seems likely that intelligence sophisticated enough to ponder its own consciousness probably really is conscious, why would a reboot have any effect on its identity?

In any case, I don’t have any answers. Eliezer’s Identity Isn’t In Specific Atoms seems to me to suggest that cryonics is probably unnecessary if I can instead do a molecule-level brain-image upload before death (assuming that turns out to be possible). But if that’s so, don’t we also need to reject the idea of a personal future?
How are sleep, unconsciousness, or deep anesthesia any different, though? ... But further, why is continuity important?
Those two questions are two sides of the same coin to me. Sleep, unconsciousness, and deep anesthesia all preserve continuity in the form of synapses and other neural connections. In none of those cases does the brain actually stop running; only the consciousness program does. You can’t just pull out someone’s heart while they’re anesthetized: if the brain’s cells die from lack of fuel, you’re destroying the hardware that the consciousness program needs to reboot from.
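To make the hardware/program distinction concrete, here is a toy Python sketch (purely illustrative; the “brain state” is just a dictionary, and a file stands in for the physical substrate):

    import os
    import pickle

    # A running "consciousness program" is just state plus an update rule.
    state = {"memories": ["childhood", "yesterday"], "tick": 42}

    # Anesthesia or sleep: the program pauses, but the substrate keeps
    # the state intact.
    with open("brain_state.pkl", "wb") as f:
        pickle.dump(state, f)

    # Waking up: resume from the preserved substrate.
    with open("brain_state.pkl", "rb") as f:
        resumed = pickle.load(f)
    assert resumed == state  # nothing was lost during the pause

    # Brain-cell death from lack of fuel is like deleting the file:
    # afterwards there is nothing left to reboot from.
    os.remove("brain_state.pkl")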
If intelligence can be simulated on a computer, and it seems likely that intelligence sophisticated enough to ponder its own consciousness probably really is conscious, why would a reboot have any effect on its identity?
Assuming that you have programmed it to care about its own consciousness, not just to ponder it, the first boot would die, and the reboot would wake up thinking it was the first boot.
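A toy Python sketch of that claim (the names are invented for illustration): the rebooted instance inherits the original’s memories, so it sincerely reports being the original, yet it is a distinct object.

    import copy

    class Mind:
        def __init__(self, memories):
            self.memories = memories

    first_boot = Mind(["booting up", "pondering my own consciousness"])

    # "Reboot": a new instance restored from a snapshot of the first one.
    reboot = Mind(copy.deepcopy(first_boot.memories))

    print(first_boot.memories == reboot.memories)  # True: same content
    print(first_boot is reboot)                    # False: different instance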
...cryonics is probably unnecessary if I can instead do a molecule-level brain-image upload before death (assuming that turns out to be possible). But if that’s so, don’t we also need to reject the idea of a personal future?
When you upload your brain-image, please make the most of your life after that, because it would be the same as with the computer: you will die in fear and loneliness, and your copy will wake up convinced he is you. (That would make a great fortune-cookie message!) In both cryonic preservation and brain upload, the original quantum system that is you is being shut down. No splitting realities are involved here (except the usual ones); you are going to experience death, and it was my understanding that the point of cryonics and mind transfer was to avoid experiencing death. (By “experience death”, I mean that your mind-pattern ceases to function.) Anyone deriving comfort from those two methods should seriously consider this concrete downside to them.
Assuming that you have programmed it to care about its own consciousness, not just to ponder it, the first boot would die, and the reboot would wake up thinking it was the first boot.
But if a consciousness can be simulated on a computer running at multiple GHz, would not a simulation on a computer running at one cycle per hour also be conscious? And then if you removed power from the computer for the hour between each cycle, is there any reason to think that would affect the simulation?
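In code the point is easy to see: a deterministic step function has no access to the wall-clock time (or the power cycles) between its steps. A sketch, with the caveat that this assumes the mind really is such a step function:

    import pickle
    import time

    def step(state):
        # One "cycle" of the simulated mind: a pure function of its state.
        return {"tick": state["tick"] + 1}

    start = {"tick": 0}

    # Three cycles back to back, at full speed...
    fast = step(step(step(start)))

    # ...versus three cycles with a checkpoint and an arbitrary pause
    # (even a power-off, since the state lives on disk) between them.
    slow = start
    for _ in range(3):
        slow = step(slow)
        with open("ckpt.pkl", "wb") as f:
            pickle.dump(slow, f)
        time.sleep(0.01)  # stand-in for an hour of downtime
        with open("ckpt.pkl", "rb") as f:
            slow = pickle.load(f)

    assert fast == slow  # the simulation cannot tell the difference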
My intuition as well. Continuity seems less of a big deal when we imagine computer hardware intelligence scenarios.
As another scenario, imagine a computer based on light waves alone; it’s hard to see how a temporary blocking of the input light wave, for example, could cause anything as substantial as the end of a conscious entity.

However, if I think too much about light waves and computers, I’m reminded of the LED cellular-automaton computationalist thought experiment and start to have nagging doubts about computer consciousness.
Perhaps I misunderstood what you meant by “reboot”. The situation you are describing now preserves continuity, and therefore is not death. In the first situation, I assumed that information was being erased. Similarly, neural cellular death corrupts the entire program. If there were a way to instantly stop a human brain and restart the same brain later, that would not be death; but freezing yourself now does not accomplish that, and neither does copying a brain.
(Unimportant note: it wasn’t I who brought up reboots.)
Anyway, I believe that’s why cryonics advocates believe it works. Their argument is that all the relevant information is stored in the synapses, etc., and that this information is preserved with sufficient fidelity during vitrification. I’m not sure about the current state of cryopreservatives, but a good enough antifreeze ought even to be able to vitrify neurons without ‘killing’ them, meaning they could be restarted after thawing. In any case, cellular death should not “corrupt the entire program”: as long as no important information is lost, we can repair it all.
I’m much less confident about the idea of uploading one’s mind into a computer as a way of survival since that involves all sorts of confusing stuff like copies and causality.