This all seems like exploiting ambiguity about what your conditional probabilities are conditional on.
Conditional on “you will be around a supercritical ball of enriched uranium and alive to talk about it,” things get weird, because that’s such a low-probability event to begin with. I suspect I’d still favor theories that involve some kind of unknown or unspecified physical intervention over “the neutrons all happened to miss,” but we should notice that the weirdness comes from conditioning on such a low-probability event in the first place.
Conditional on “someone telling me I’m around a supercritical ball of enriched uranium and alive to talk about it,” they’re probably lying or otherwise trolling me.
Conditional on “I live in a universe governed by the Standard Model and I’m alive to talk about it,” the constants are probably tuned to support life.
Conditional on “the Cold War happened, lasted for a number of decades, and I’m alive to talk about it,” humanity was probably (certainly?) not wiped out.
Once you think about it this way, any counterintuitive implications for prediction go away. For instance, we don’t get to conclude that nuclear cold wars aren’t existentially dangerous just because, conditional on humanity surviving them, they weren’t: that’s conditioning on the very event whose probability we’re trying to calculate! But we also can’t discount “we survived the Cold War” as (some sort of) evidence that cold wars might be less dangerous than we thought. For prediction (and for evaluating retrodictions), the right event to condition on is “having a cold war (but not necessarily surviving it).”
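To make the conditioning point concrete, here’s a minimal Monte Carlo sketch. The 10% per-decade extinction risk and the four-decade length are made-up numbers, chosen only to show the structure of the argument:

```python
import random

random.seed(0)

P_EXTINCTION_PER_DECADE = 0.10  # assumed (made-up) per-decade extinction risk
DECADES = 4                     # rough length of the Cold War
TRIALS = 100_000

# Count timelines in which humanity survives every decade of the cold war.
survived = sum(
    all(random.random() > P_EXTINCTION_PER_DECADE for _ in range(DECADES))
    for _ in range(TRIALS)
)

# Conditioning on "having a cold war": an informative estimate of the danger.
print("P(survive | cold war)           ~", survived / TRIALS)

# Conditioning on "having a cold war AND being alive to talk about it":
# every timeline we can sample from already survived, so the estimate is
# 1.0 by construction and tells us nothing about how dangerous it was.
print("P(survive | cold war, observer) =", 1.0)
```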
On the Cold War thing, I think the lesson to learn depends on whether situations that start with a single nuclear launch reliably (and rapidly) escalate into world-destroying conflicts.
If (nuclear launch) → (rapid extinction), then it seems like the anthropic principle is relevant, and the close calls really might have involved improbable luck.
If, on the other hand, (nuclear launch) → (perhaps lots of death, but usually many survivors), then this suggests the stories of how close the close calls were are exaggerated.
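A toy Bayesian sketch of these two branches (all numbers invented for illustration): without an anthropic correction, our survival counts as evidence that the close calls are exaggerated or survivable; with a full anthropic correction, survival is uninformative and the “improbable luck” hypothesis keeps its prior:

```python
# Hypothesis A: launches reliably escalate to extinction, and the close calls
#               were genuinely close (we survived through improbable luck).
# Hypothesis B: launches usually leave many survivors, and/or the close-call
#               stories are exaggerated.
prior_A, prior_B = 0.5, 0.5

# Probability of observing "decades of close calls, yet we're alive":
p_alive_given_A = 0.05   # under A, survival required a lot of luck
p_alive_given_B = 0.95   # under B, survival was the expected outcome

# Ordinary (non-anthropic) update: our survival is strong evidence for B.
post_A = prior_A * p_alive_given_A
post_B = prior_B * p_alive_given_B
print("Without anthropic correction: P(A | alive) =",
      round(post_A / (post_A + post_B), 3))

# With a full anthropic correction, observers only ever sample surviving
# branches, so "we're alive" is uninformative and A keeps its prior.
print("With full anthropic correction: P(A | alive) ~", prior_A)
```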
Your ‘if’ statements made me update. I guess there is also a distinction between the two conclusions one might draw from this type of anthropic reasoning.
One (maybe naive?) conclusion is that ‘the anthropic principle is protecting us’. If you think the anthropic principle is relevant, then you continue to expect it to allow you to evade extinction.
The other conclusion is that ‘the anthropic perspective is relevant to our past but not our future’. You consider anthropics to be a source of distortion on the historical record, but not a guide to what will happen next. Under this interpretation you would anticipate extinction of [humans / you / other reference class] to be more likely in the future than in the past.
I suspect this split depends on whether you weight your future timelines by how many observers are in them, etc.
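Here’s a toy sketch of what that weighting does; the branch probabilities and observer counts below are invented purely to show the contrast:

```python
branches = [
    # (physical probability of the branch, number of future observers in it)
    (0.5, 0),           # extinction branch: contains no future observers
    (0.5, 10_000_000),  # survival branch
]

# Unweighted view: the chance of extinction is just the branch probability.
p_ext_unweighted = sum(p for p, n in branches if n == 0)

# Observer-weighted view: weight each branch by its observer count before
# normalizing; the extinction branch then gets zero weight, which is the
# "anthropics protects us" intuition.
total_weight = sum(p * n for p, n in branches)
p_ext_weighted = sum(p * n for p, n in branches if n == 0) / total_weight

print("Unweighted        P(extinction) =", p_ext_unweighted)   # 0.5
print("Observer-weighted P(extinction) =", p_ext_weighted)     # 0.0
```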