Suppose the reality of “quantum immortality” effects is one hypothesis that the robot considers. The probability of observing survival of 10 jumps given the no-QI hypothesis is approximately 33%, since the robot could have failed to survive to observe anything. But given the QI hypothesis, the probability of observing survival of 10 jumps is virtually 100%, since some version of the robot would survive to observe itself under all three engine-failure conditions. If QI had a prior P(QI) = X before, then after the 10 jumps it has a posterior of 3X/(1+2X). So it seems clear that QI is relevant to whether the anthropic evidence can be usefully employed, and that the anthropic evidence is relevant to the evaluation of QI. Furthermore, it becomes relevant to the robot’s decision whether you value merely the fact that the person is alive (as I initially assumed) or instead the total measure of the person that is alive (as philh argued).
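To make the update concrete, here is a minimal sketch of that Bayesian calculation (the function name and the way I encode the likelihoods are mine; the numbers follow from the setup above):

```python
def qi_posterior(prior_qi):
    """Posterior P(QI | observed surviving 10 jumps), assuming
    P(observe survival | QI)    = 1.0  (some branch always survives)
    P(observe survival | no-QI) = 1/3  (the robot might not survive
                                        to observe anything)."""
    numerator = prior_qi * 1.0
    evidence = numerator + (1.0 - prior_qi) * (1.0 / 3.0)
    return numerator / evidence  # simplifies to 3X / (1 + 2X)

# The odds are trebled: e.g. a prior of 0.10 becomes a posterior of 0.25.
for x in (0.01, 0.10, 0.50):
    print(f"prior {x:.2f} -> posterior {qi_posterior(x):.3f}")
```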
There is no requirement to interpret QI in the silly way you and Manfred characterized it. An objective version is simply the claim that some of the total measure of a person will survive.
Quantum Immortality is the idea that you personally will never experience death, because somehow those versions of you that die “can’t experience anything” and so don’t count—it’s an idea that can only be believed by people who have a confused mystical view of Death as a single event on a physical level, instead of as the termination of a trillion different individual processes in the brain.
The example that this article provides, on the other hand, can be simulated programmatically (and thus answered very specifically at the decision-theoretic level) by a simple process that calls “fork()” and is variably terminated or allowed to continue; see the sketch below.
As such it has nothing to do with the various confusions that the “quantum immortality” believers tend to entertain.
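As a rough rendering of that fork() framing (a minimal sketch; the step count, termination probability, and reporting are my own choices, and os.fork() is Unix-only):

```python
import os
import random

N_STEPS = 5        # kept small: fork() doubles the live process count per step
P_TERMINATE = 0.5  # illustrative termination probability

def run(step=0):
    if step == N_STEPS:
        # Only branches that survived every step ever report anything --
        # the selection effect the anthropic argument turns on.
        print(f"pid {os.getpid()} observed surviving all {N_STEPS} steps")
        return
    os.fork()      # both copies continue from here as separate branches
    random.seed()  # reseed: the child inherits the parent's RNG state
    if random.random() < P_TERMINATE:
        os._exit(0)  # this branch is terminated and observes nothing
    run(step + 1)

if __name__ == "__main__":
    run()
```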
I don’t care to argue the true definition of the QI hypothesis, though it does indeed have an identifiable original form. The math I mentioned still works for both your mystical-QI-hypothesis and my physical-QI-hypothesis. Both versions of the hypothesis will have their odds trebled, which could give them enough weight (e.g. for an AIXI-bot) to noticeably affect the choice of action that returns the optimal expected value. Specifically, if the robot has been programmed to care about the total measure of surviving humanity, then the more weight is given to QI hypotheses, the less weight (proportionally) is given to the false-positive hypothesis, and the less likely the robot is to take new jumps.
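As a toy illustration of that shift (all payoffs here are invented for illustration, not taken from the original post), trebling the odds of QI can flip the expected-value comparison for an agent that values total surviving measure:

```python
def treble_odds(p):
    """Treble the odds p/(1-p); algebraically this is 3p / (1 + 2p)."""
    return 3 * p / (1 + 2 * p)

GAIN_IF_FALSE_POSITIVE = 1.0   # hypothetical value of a successful jump
LOSS_IF_QI = -4.0              # hypothetical measure destroyed if QI is true

def jump_ev(p_qi):
    return (1 - p_qi) * GAIN_IF_FALSE_POSITIVE + p_qi * LOSS_IF_QI

prior = 0.10
posterior = treble_odds(prior)  # 0.25
print(jump_ev(prior))      # +0.50 -> jumping looks good before the evidence
print(jump_ev(posterior))  # -0.25 -> jumping looks bad after the update
```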
Here’s Wikipedia’s description of the original thought experiment: quantum suicide and immortality. Importantly for the whole point, in the thought experiment death does indeed come from a machine controlled by a single quantum event. Here’s a critical view, though in my opinion it quickly gets bogged down in dubious philosophy-of-mind assumptions.