Assuming that you have programmed it to care about its own consciousness, not just to ponder it, the first boot would die, and the reboot would wake up thinking it was the first boot.
But if a consciousness can be simulated on a computer running at multiple GHz, would a simulation on a computer running at one cycle per hour not also be a consciousness? And if you removed power from the computer for the hour between cycles, is there any reason to think that would affect the simulation?
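To make that intuition concrete, here’s a toy sketch (purely illustrative, and of my own devising; the update rule is an arbitrary stand-in, not a model of a mind): a deterministic simulation stepped with arbitrary pauses between cycles passes through exactly the same states as one stepped at full speed. The pause never enters the computation.

```python
import time

def step(state: int) -> int:
    # Stand-in for one cycle of the simulation: any deterministic update.
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def run(initial: int, cycles: int, pause_seconds: float = 0.0) -> int:
    state = initial
    for _ in range(cycles):
        state = step(state)
        time.sleep(pause_seconds)  # "power off" between cycles; state untouched
    return state

# Fast run and slow run reach the identical state.
assert run(42, 100) == run(42, 100, pause_seconds=0.01)
```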
My intuition as well. Continuity seems like less of a big deal when we imagine computer-hardware intelligence scenarios.
As another scenario, imagine a computer based on light waves alone; it’s hard to see how a temporary blocking of the input light wave, for example, could cause anything as substantial as the end of a conscious entity.

However, if I think too much about light waves and computers, I’m reminded of the LED cellular-automaton computationalist thought experiment and start to have nagging doubts about computer consciousness.
Perhaps I misunderstood what you meant by “reboot”. The situation you are describing now preserves continuity and therefore is not death. In the first situation, I assumed that information was being erased. Similarly, neural cell death corrupts the entire program. If there were a way to instantly stop a human brain and restart the same brain later, that would not be death, but freezing yourself now does not accomplish that, nor does copying a brain.
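In code terms (a toy sketch of my own, with made-up state), the distinction I mean is: pausing saves the full state and restores it bit-for-bit, whereas the “reboot” I had in mind discards the state and starts over from initial conditions.

```python
import pickle

# Made-up state for illustration only.
state = {"memories": ["first boot"], "cycle": 1000}

snapshot = pickle.dumps(state)    # instant stop: nothing erased
resumed = pickle.loads(snapshot)  # restart later: identical state
assert resumed == state           # continuity preserved; not death

rebooted = {"memories": [], "cycle": 0}  # reboot-as-erasure: prior state gone
assert rebooted != state                 # this is the case I called death
```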
(Unimportant note: it wasn’t I who brought up reboots.)
Anyway, I believe that’s why cryonics advocates believe it works. Their argument is that all relevant information is stored in the synapses, etc., and that this information is preserved with sufficient fidelity during vitrification. I’m not sure about the current state of cryopreservatives, but a good enough antifreeze ought even to be able to vitrify neurons without ‘killing’ them, meaning they could be restarted after thawing. In any case, cellular death should not “corrupt the entire program”: as long as no important information is lost, we can repair it all.
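A toy illustration of that last point (my own analogy, nothing specific to neurons): if information is held redundantly, losing one piece of the substrate destroys nothing, because the lost piece can be recomputed from what remains. This is the idea behind erasure coding.

```python
import functools
import operator

# Hypothetical data blocks standing in for redundantly stored information.
blocks = [b"synapse-map-a", b"synapse-map-b", b"synapse-map-c"]

def xor_parity(chunks):
    # XOR all chunks together byte-by-byte (padding to equal length).
    length = max(len(c) for c in chunks)
    padded = [c.ljust(length, b"\x00") for c in chunks]
    return bytes(functools.reduce(operator.xor, column) for column in zip(*padded))

parity = xor_parity(blocks)                      # stored redundancy
lost_index = 1                                   # one "cell" dies
survivors = [b for i, b in enumerate(blocks) if i != lost_index]
recovered = xor_parity(survivors + [parity]).rstrip(b"\x00")
assert recovered == blocks[lost_index]           # damage without information loss is repairable
```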
I’m much less confident about the idea of uploading one’s mind into a computer as a way of survival since that involves all sorts of confusing stuff like copies and causality.