I understood your argument as follows: anything which is an argument for QI can also be an argument for aliens saving us, and thus nothing is evidence for QI. However, the a priori probabilities of QI and aliens are not mutually independent: QI increases the chances of aliens with every round. We can't observe QI directly, but we will observe the aliens, and that is exactly what QI predicts.
No, the argument is that the traditional (weak) evidence for anthropic shadow is instead evidence of an anthropic angel. QI is an example of an anthropic angel, not of anthropic shadow.
So, for example, a statistically implausible number of LHC failures would be evidence for some sort of QI and also for other related anthropic angel hypotheses, and they don't need to be exclusive.
Past LHC failures would just be civilization-level QI. (BTW, there are real things like this in the history of Earth's atmosphere, where CO2 content was anti-correlated with the Sun's luminosity, which resulted in stable temperatures.) But it is not clear to me what other anthropic effects there are which are not QI – what do you mean here? Can you provide one more example?
A universe with classical mechanics, except that when you die the universe gets resampled, would be anthropic angelic.
Beings who save you are also anthropic angelic. For example, the fact that you don’t die while driving is because the engineers explicitly tried to minimize your chance of death. You can make inferences based on this. For example, even if you have never crashed, you can reason that during a crash you will endure less damage than other parts of the car, because the engineers wanted to save you more than they wanted to save the parts of the car.
The first idea seems similar to Big World immortality: the concept that due to chaotic inflation, many copies of me exist somewhere, and some of them will not die in any situation. While the copies are the same, the worlds around them could be different, which opens other options for survival: in some worlds, aliens might exist who could save me. The simulation argument can also act as such an anthropic angel, as there will be simulations where I survive. So there can be different observation selection effects that ensure my survival, and it may be difficult to observationally distinguish between them.
Therefore, survival itself is not evidence of MWI, Big World, or simulation. Is that your point?
Regarding the car engineers situation, it is less clear to me. I know that cars are designed to be safe, so there is no surprise. Are you suggesting it is anthropic because we are more likely to be driving later in the car evolution timeline, when cars are safer?
I suppose the main point you should draw from “Anthropic Blindness” regarding QI is that:
Quantum Immortality is not a philosophical consequence of MWI; it is an empirical hypothesis with a very low prior (due to its complexity).
Death is not special. Assuming you have never gotten a Fedora up to this point, it is consistent to assume that “Quantum Fedoralessness” is true: that is, if you keep flipping a quantum coin that has a 50% chance of giving you a Fedora, the universe will only have you experience the path that doesn't give you the Fedora. Since you have never gotten a Fedora yet, you can't rule this hypothesis out. The silliness of this example demonstrates why we should likewise be skeptical of Quantum Immortality.
If MWI is true, there will be timelines where I survive any risk. This claim is factual and equivalent to MWI, and the only things that prevent me from regarding it as immortality are questions related to decision theory. If MWI is true, QI has a high a priori probability and low associated complexity.
The Fedora case has high complexity and no direct connection to MWI, hence a low a priori probability.
Now for the interesting part: QI becomes distinct from the Fedora case only when the chances are 1 in a trillion.
First example: when 1000 people each play Russian roulette for 10 rounds with a 0.5 survival chance per round and one of them survives, that survivor might think it's because of QI. (This probability is equivalent to surviving to 100 years old according to the Gompertz law.)
When 1000 people play Quantum Fedora (10 rounds at 0.5) and one of them ends up without a Fedora, they think it's because they have a special anti-Fedora capability. In this case, it's obvious they're wrong, and I think this first example is what you're pointing to.
(I would note that even in this case, one has to update more for QI than for Fedora. In the Fedora case, there will be, say, 1023 copies of me with Fedora after 10 flips of a quantum coin versus 1 copy without Fedora. Thus, I am very unlikely to find myself without a Fedora. This boils down to difficult questions about SSA and SIA and observer selection. Or, in other words: can I treat myself as a random sample, or should I take the fact that I exist without a Fedora as axiomatic? This question arises often in the Doomsday argument, where I treat myself as a random sample despite knowing my date of birth.)
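To make the arithmetic in the two examples above concrete, here is a minimal sketch. It assumes independent fair quantum coin flips (0.5 per round) and 1000 players, as stated above; nothing else is taken from the discussion.

```python
# Back-of-the-envelope numbers for the 10-round, 1000-player examples above.
# Assumes each round is an independent fair quantum coin (p = 0.5).

p_round = 0.5          # per-round chance of surviving (or of not getting a Fedora)
n_rounds = 10
n_players = 1000

p_person = p_round ** n_rounds                      # ~1/1024 ≈ 0.000977 per person
expected_survivors = n_players * p_person           # ~0.98, i.e. roughly one survivor
p_at_least_one = 1 - (1 - p_person) ** n_players    # ~0.62 chance anyone survives

# Branch counting for the Fedora parenthetical: after 10 quantum flips there are
# 2**10 = 1024 branches; only one of them contains a copy of me with no Fedora.
branches = 2 ** n_rounds         # 1024
fedora_branches = branches - 1   # 1023 copies with at least one Fedora

print(p_person, expected_survivors, p_at_least_one, fedora_branches)
```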
However, the situation is different if one person plays Russian roulette 30 times (this probability is equivalent to surviving to 140 years old according to the Gompertz law). In that case, the experiment can no longer be externalized to a large enough reference class: only 8 billion people live on Earth, and there are no known aliens. Even if the entire Earth's population played Russian roulette, there would be only a 1 percent chance of survival, and the fact of surviving would be surprising. But if QI is true, it isn't surprising. That is, it's not surprising to survive to 100 years old, but surviving to 140 is.
Now if I play Fedora roulette 30 times and still have no Fedora, this can be true only in MWI. So if there’s no Fedora after 30 rounds, I get evidence that MWI is true and thus QI is also true. But I am extremely unlikely to find myself in such a situation.
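A minimal sketch of the kind of update being described here, glossing over the SSA/SIA issues raised in the parenthetical above. The prior values are placeholders chosen purely for illustration, not anything asserted in the discussion.

```python
# Toy Bayesian update for the "still no Fedora after 30 rounds" observation.
# The priors below are illustrative placeholders, not claims from the discussion.

prior_single_world = 0.5     # hypothetical prior on a single-world picture
prior_fedoraless = 0.5       # hypothetical prior on MWI + "Quantum Fedoralessness"

# Likelihood of the observation (no Fedora after 30 fair quantum flips):
# - single world: plain chance, 0.5**30
# - MWI + Fedoralessness as described above: I only ever experience the no-Fedora path
like_single = 0.5 ** 30      # ~9.3e-10
like_fedoraless = 1.0

evidence = prior_single_world * like_single + prior_fedoraless * like_fedoraless
posterior_fedoraless = prior_fedoraless * like_fedoraless / evidence
print(posterior_fedoraless)  # ~0.999999999: the observation would be strong evidence
```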