Presumably you’re referencing the notion of quantum immortality. If QI is a possibly real effect, then the robot’s repeated survival counts as evidence for both the false-positive situation and for QI. For plausible priors it probably makes no difference, because in this story the robot’s survival and humanity’s survival are linked, and if QI applies to the robot then it applies to humanity, too.
Remember that if humanity only survives due to QI, then almost all of humanity’s total measure does not survive. We can’t observe the “universes” in which humanity is dead, but that doesn’t mean we shouldn’t care about them.
Why should I care about dead measure? Serious question.
I care because that measure contains human beings, and I care about human beings, even ones I can’t observe.
I can’t tell you what to care about, but I repeat that “we can’t observe that measure” is not (as far as I’m concerned) a reason to jump to “I don’t care about the human beings who make up that measure”.
(I suspect there’s a sequence post that covers this better than I can, but I don’t know it offhand.)
You should care for the same reason you don’t want to die—not because being dead is so bad, but because you’d rather be living.
If you play Russian roulette with a quantum coin, “you” don’t move into a different world if the gun fires. There is no immortal soul that occupies quantum states like a hermit crab, scuttling off into a different-sized shell when necessary. If you don’t like dying, you won’t like getting shot.
It’s possible to construct an agent that rationally chooses quantum suicide. But this is inconsistent with not wanting to die.
This isn’t about quantum immortality, because frankly we don’t give a damn about the robot’s internal subjective experience, only about how we’ll program it so that it maximizes our own expected benefit.
“Quantum immortality” is about the supposed persistence of an internal subjective experience.
Suppose the reality of “quantum immortality” effects is one hypothesis that the robot considers. The probability of observing survival after 10 jumps given the no-QI hypothesis is approximately 33%, since the robot could have failed to observe anything. But given the QI hypothesis, the probability of observing survival after 10 jumps is virtually 100%, since some version would survive to observe itself under all three engine-failure conditions. If QI had a prior P(QI)=X before, then after the 10 jumps it has a posterior of 3X/(1+2X). So it seems clear that QI is relevant to whether the anthropic evidence can be usefully employed, and that the anthropic evidence is relevant to the evaluation of QI. Furthermore, it becomes relevant to the robot’s decision whether you value merely the fact that the person is alive (as I initially assumed) or instead the total measure of the person that is alive (as philh argued).
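The update above is just Bayes’ rule with the two likelihoods stated in the comment. A minimal sketch, taking the likelihoods P(observe survival | no-QI) ≈ 1/3 and P(observe survival | QI) ≈ 1 as given, and an illustrative prior:

```python
def qi_posterior(prior, p_obs_given_qi=1.0, p_obs_given_noqi=1 / 3):
    """Posterior P(QI | observed survival) by Bayes' rule.

    The likelihoods default to the values assumed in the comment above;
    any specific prior plugged in is purely illustrative.
    """
    numerator = p_obs_given_qi * prior
    evidence = numerator + p_obs_given_noqi * (1 - prior)
    return numerator / evidence


# With these likelihoods the posterior simplifies to 3X / (1 + 2X):
x = 0.1
assert abs(qi_posterior(x) - 3 * x / (1 + 2 * x)) < 1e-12
```

The “odds trebled” phrasing falls out directly: the likelihood ratio is 1 / (1/3) = 3, so the odds X : (1−X) become 3X : (1−X).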
There is no requirement to interpret QI in the silly way you and Manfred characterized it. An objective version is simply that some of the total measure of a person will survive.
Quantum Immortality is the idea that you personally will never experience death, because somehow those versions of you that die “can’t experience anything” and so don’t count. It’s an idea that can only be believed by people who hold a confused, mystical view of Death as a single event on a physical level, rather than as the termination of a trillion different individual processes in the brain.
The example that this article provides, on the other hand, can be simulated programmatically (and thus answered very specifically at the decision-theoretic level) by a simple process that calls “fork()” and is variably terminated or allowed to continue.
As such it has nothing to do with the various confusions that the “quantum immortality” believers tend to entertain.
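The fork() framing can be made concrete without any actual process forking. A toy sketch, with hypothetical parameters (branching factor 2, a per-child death probability): each round, every live branch splits and each child independently dies, so total measure stays conserved while being whittled away, and “some branch survives” can hold even as surviving measure shrinks:

```python
import random


def simulate_branches(n_rounds, p_die=0.5, seed=0):
    """Toy fork() model of branching survival.

    Each round, every live branch splits into two children, and each
    child independently dies with probability p_die (a hypothetical
    parameter). Returns (fraction of total measure surviving, whether
    any branch survives at all).
    """
    rng = random.Random(seed)
    branches = 1
    measure = 1.0
    for _ in range(n_rounds):
        children = branches * 2
        survivors = sum(1 for _ in range(children) if rng.random() >= p_die)
        measure *= survivors / children  # dead children carry their measure away
        branches = survivors
        if branches == 0:
            break
    return measure, branches > 0
```

Running this many times shows the gap the thread is arguing about: conditioning on “some branch survives” says nothing about how much of the original measure did.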
I don’t care to argue the true definition of the QI hypothesis, though it does indeed have an identifiable original form. The math I mentioned still works for both your mystical-QI-hypothesis and my physical-QI-hypothesis. Both versions of the hypothesis will have their odds trebled, which could give them enough weight (e.g. for an AIXI-bot) to noticeably affect the choice of action that will return the optimal expected value. Specifically, if it has been programmed to care about the total measure of surviving humanity, then the more weight is given to QI hypotheses, the less weight (proportionally) is given to the false-positive hypothesis, and the less likely the robot is to take new jumps.
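The direction of the effect on the jump decision can be sketched as a simple two-hypothesis mixture. All the numbers here are hypothetical placeholders, not values from the story: under the false-positive hypothesis a jump preserves essentially all of humanity’s measure, while under a QI hypothesis it destroys some of it, so raising P(QI) lowers the expected measure preserved by jumping:

```python
def expected_surviving_measure(p_qi, measure_if_qi=0.5,
                               measure_if_false_positive=1.0):
    """Expected total measure preserved by taking another jump, as a
    mixture over the QI hypothesis and the false-positive hypothesis.
    The per-hypothesis measures are illustrative assumptions.
    """
    return p_qi * measure_if_qi + (1 - p_qi) * measure_if_false_positive


# As P(QI) rises, the expected measure preserved by jumping falls,
# so a measure-maximizing robot becomes less willing to jump.
assert expected_surviving_measure(0.6) < expected_surviving_measure(0.2)
```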
Here’s Wikipedia’s description of the original thought experiment: quantum suicide and immortality. Quite importantly to the whole point, in the thought experiment death does indeed come by a machine controlled by a single quantum event. Here’s a critical view, though in my opinion it quickly gets bogged down in dubious philosophy-of-mind assumptions.