Again, anthropics is basically generalizing from one example. Yes, humans have dodged an x-risk bullet a few times so far. There was no nuclear war. The atmosphere didn’t explode when the first nuclear bomb was detonated (something that does happen to white dwarfs in binary systems, leading to some supernova explosions). The Black Death pandemic did not wipe out nearly everyone, etc.

If we have a reference class of x-risks and assign to each member a probability p of surviving the close call, then all we know is that after observing n close calls the probability of no extinction is p^n. If that number is vanishingly small, we might want to reconsider our estimate of p (“the world is safer than we thought”). Or maybe the reference class is not constructed correctly. Or maybe we truly got luckier than other hypothetical observable civilizations that didn’t make it. Or maybe quantum immortality is a thing. Or maybe something else. After all, there is only one example, and until we observe some other civilizations actually not making it through, anthropics is groundless theorizing.

Maybe we can gain more insight into the reference classes, the probability of a close call, and the probability of surviving an event by studying near-extinction events that roughly fit into the same reference class (past asteroid strikes, plagues, climate changes, …). However, none of the useful information comes from guessing the size of the universe, from wondering whether we are in a simulation, or from “updating based on the fact that we exist” beyond accounting for the close calls and x-risk events.
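To make the p^n point concrete, here is a minimal sketch of the “the world is safer than we thought” update, treating survived close calls as ordinary evidence. This is my own illustration, not anything from the discussion: the Beta prior and its parameters are assumptions chosen purely for the example.

```python
# A minimal sketch (assumed setup, not from the comment above): put a
# Beta(a, b) prior on p, the per-close-call survival probability.
# Conditioning on having survived n close calls (likelihood proportional
# to p^n) gives the conjugate posterior Beta(a + n, b).

def survival_posterior(a: float, b: float, n: int) -> tuple[float, float]:
    """Posterior Beta parameters after surviving n close calls, no extinctions."""
    return a + n, b

def beta_mean(a: float, b: float) -> float:
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

if __name__ == "__main__":
    a0, b0 = 2.0, 2.0  # assumed prior: p uncertain, centered at 0.5
    for n in (0, 3, 10):  # number of close calls survived so far
        a, b = survival_posterior(a0, b0, n)
        print(f"n={n:2d}  E[p] = {beta_mean(a, b):.3f}")
    # E[p] climbs with n: each survived close call pulls the per-event
    # survival estimate upward. That inference only holds if the reference
    # class is right and survival is not observation-selected, which is
    # exactly the anthropic caveat in the comment above.
```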
That said, I certainly agree with your point 4: that only the observed data need to be accounted for.
The reason I assume those is so that only the “standard” updating remains: I’m deliberately removing the anthropically weird cases.