Example: is it moral to power-cycle (hibernate, turn off, power on, restore) a computer running a self-aware AI? Will future machine intelligences view any less-than-necessary AGI experiments I run the same way we view Josef Mengele’s work in Auschwitz? Is it a possible failure mode that an unfriendly or not-provably-friendly AI subjected to routine power cycling might uncover this line of reasoning and decide it doesn’t want to “die” every night when the lights go off? What would it do then?
OK, in a hypothetical world where somehow pausing a conscious computation—maintaining all data such that it could be restarted losslessly—is murder, those are concerns. Agreed. I’m not arguing against that.
My position is that pausing a computation as described above happens not to be murder/death, and that those who believe it is are mistaken. What I’m looking for is an objective example that would demonstrate this sort of pausing is murder/death. (In my view, the bad thing about death is its permanence; that is most of why we care about murder and what makes it a moral issue.)
As Eliezer mentioned in his reply (in different words), if power cycling is death, what’s the shortest suspension time that isn’t? Currently most computers run synchronously off a common clock, and the computation is completely suspended between clock cycles. Does this mean that an AI running on such a computer is murdered billions of times every second? If so, then a morality leading to this absurd conclusion is not a useful one.
Edit: it’s actually worse than that: digital computation happens mostly within a short window around each clock-level switch. The rest of the time between transitions just ensures that the electrical signals relax to within their tolerance levels. Which means that the AI in question would be dead roughly 90% of the time.
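The clocked-suspension point can be made concrete with a toy sketch (hypothetical code, not any real AI: `step` stands in for one clock tick of a deterministic computation). Whether the steps run back to back or with arbitrary wall-clock pauses between them, the trajectory of states is identical, so the gaps are unobservable from inside the computation:

```python
import time

def step(state):
    """One 'clock tick': a deterministic state-transition function (here a toy LCG)."""
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def run(initial, steps, pause=0.0):
    """Run the computation, optionally 'suspending' it between every tick."""
    state = initial
    for _ in range(steps):
        if pause:
            time.sleep(pause)  # suspension between clock cycles
        state = step(state)
    return state

# Identical final state with and without suspensions between steps.
assert run(42, 100) == run(42, 100, pause=0.001)
```

Nothing in the final state records whether a nanosecond or an hour passed between ticks; that is the substance of the between-clock-cycles argument above.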
What Eliezer and you describe is more analogous to task switching on a timesharing system, and yes, my understanding of computational continuity theory is that such a machine would not be sent to oblivion 120 times a second. Rather, such a computer would be strangely schizophrenic, but also completely self-consistent at any moment in time.
But computational continuity does have a different answer in the case of intermediate non-computational states. For example, saving the state of a whole brain emulation to magnetic disk, shutting off the machine, and restarting it sometime later. In the meantime, shutting off the machine resulted in decoupling/decoherence of state between the computational elements of the machine, and general reversion back to a state of thermal noise. This does equal death-of-identity, and is similar to the transporter thought experiment. The relevance may be more obvious when you think about taking the drive out and loading it in another machine, copying the contents of the disk, or running multiple simulations from a single checkpoint (none of these change the facts, however).
In the meantime, shutting off the machine resulted in decoupling/decoherence of state between the computational elements of the machine, and general reversion back to a state of thermal noise.
It is probably best for you to stay away from the physics/QM point of view on this, since you will lose: the states “between the computational elements”, whatever you may mean by that, decohere and relax to “thermal noise” much quicker than the time between clock transitions, so there is no difference between a nanosecond and an hour.
Maybe what you mean is more logic-related? For example, when a self-aware algorithm (including a human) expects one second to pass and instead measures a full hour (because it was suspended), it interprets that discrepancy of inputs as death? If so, shouldn’t any unexpected discrepancy, like sleeping past your alarm clock, or day-dreaming in class, be treated the same way?
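The “discrepancy of inputs” criterion can be stated as a sketch (hypothetical: `observe` and its thresholds are invented for illustration). A process can only infer suspension by comparing how much time it expected to pass against how much the external clock says passed, and any criterion of this form also fires on oversleeping or daydreaming:

```python
import time

def observe(prev_wall, step_duration=0.01, tolerance=0.1):
    """One perceptual step: compare expected vs. measured elapsed time.

    Returns (current_wall_time, gap_detected), where gap_detected is True
    whenever more time passed than the process expected for one step.
    """
    now = time.monotonic()
    gap_detected = (now - prev_wall) > step_duration + tolerance
    return now, gap_detected

# Normal running: no anomaly.
t, gap = observe(time.monotonic())
assert gap is False

# Simulate a one-second "suspension" by backdating the last observation.
_, gap = observe(time.monotonic() - 1.0)
assert gap is True
```

Note the criterion cannot distinguish a power cycle from a slow scheduler, a long sleep, or a missed alarm; that is the point of the objection above.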
This does equal death-of-identity, and is similar to the transporter thought experiment.
I agree that forking a consciousness is not a morally trivial issue, but that’s different from temporary suspension and restarting, which happens all the time to people and machines. I don’t think that conflating the two is helpful.
It is probably best for you to stay away from the physics/QM point of view on this, since you will lose: the states “between the computational elements”, whatever you may mean by that, decohere and relax to “thermal noise” much quicker than the time between clock transitions, so there is no difference between a nanosecond and an hour.
Maybe what you mean is more logic-related?...
No, I meant the physical explanation (I am a physicist, btw). It is possible for a system to exhibit features at certain frequencies, whilst only showing noise at others. Think standing waves, for example.
I agree that forking a consciousness is not a morally trivial issue, but that’s different from temporary suspension and restarting, which happens all the time to people and machines. I don’t think that conflating the two is helpful.
When does it ever happen to people? When does your brain, or even just regions of it, ever stop functioning entirely? You do not remember deep sleep because you are not forming memories, not because your brain has stopped functioning. What else could you be talking about?
Hmm, I get a feeling that none of these are your true objections and that, for some reason, you want to equate suspension to death. I should have stayed disengaged from this conversation. I’ll try to do so now. Hope you get your doubts resolved to your satisfaction eventually.
I don’t want to, I just think that the alternatives lead to absurd outcomes that can’t possibly be correct (see my analysis of the teleporter scenario).
I really have a hard time imagining a universe where there exists a thing that is preserved when 10^-9 seconds pass between computational steps but not when 10^3 seconds pass between steps (while I move the hard drive to another box).