My interpretation of Zvi’s point wasn’t that your model should account for past lack of nuclear war, but that it should be sensitive to future lack of nuclear war. I.e., if you try to figure out the probability that nuclear war happens at least once over (e.g.) the next century, then if it doesn’t happen in the next 50 years, you should assign lower probability to it happening in the 50 years after that. I wrote someone a Slack message about this exact issue a couple of months ago; I’ll copy it here in case that’s helpful:
So here’s a tricky thing with your probability extrapolation: in a randomly chosen year, a forecaster should assign a lower probability to p(nuclear war within N years) than the naive 1-[1-p(nuclear war next year)]^N.
The reason for this is that the absence of nuclear war in any given year is positively correlated with the absence of nuclear war in any other given year. This positive correlation yields an increased probability that nuclear war will never happen in the given time period.
One way to recognise this: Say that someone assigns a 50% chance to the annual risk being exactly 0.2, and a 50% chance to it being exactly 0.01. Then their best guess for the next year is going to be 0.105. If that were the true annual risk, the probability of nuclear war over a decade would be 1-(1-0.105)^10 ≈ 0.67. But their actual best guess for nuclear war next decade is going to be 0.5*(1-[1-0.2]^10) + 0.5*(1-[1-0.01]^10) ≈ 0.49.
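If it helps, here’s a quick numerical check of the arithmetic above (a Python sketch using the same illustrative numbers; nothing here comes from a real model):

```
# Mixture of two hypothesized annual risks vs. naively compounding
# the single best-guess annual risk.
p_high, p_low = 0.2, 0.01  # the two hypothesized annual risks
years = 10

# Best guess for next year: the mean of the mixture.
p_mean = 0.5 * p_high + 0.5 * p_low  # 0.105

# Naive extrapolation: compound the best-guess annual risk.
naive = 1 - (1 - p_mean) ** years  # ~0.67

# Mixture extrapolation: compound within each hypothesis, then average.
mixture = 0.5 * (1 - (1 - p_high) ** years) \
        + 0.5 * (1 - (1 - p_low) ** years)  # ~0.49

print(f"naive: {naive:.2f}  mixture: {mixture:.2f}")
```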
I think one useful framing of this is that each year a person sees that nuclear war didn’t happen, they’ll update towards a lower annual risk. So towards the end of the period, they’ll have mostly updated away from the hypothesis that the annual risk is 0.2, and they’ll think the 0.01 estimate is more likely.
This whole phenomenon matters a lot more if the risks you’re dealing with are large than if they’re small. Take the perspective from the previous paragraph: if the annual risk is small, then each year without nuclear apocalypse won’t update you very much. And without much updating, using a constant annual probability is more reasonable.
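A minimal sketch of this updating story, again with the two-hypothesis prior from above (the code and variable names are mine, just for illustration):

```
# Bayesian update on the annual risk after observing k war-free years.
p_high, p_low = 0.2, 0.01
prior_high = 0.5  # prior probability that the annual risk is 0.2

for k in [0, 5, 10, 20]:
    # Likelihood of k consecutive war-free years under each hypothesis.
    like_high = (1 - p_high) ** k
    like_low = (1 - p_low) ** k
    post_high = (prior_high * like_high
                 / (prior_high * like_high + (1 - prior_high) * like_low))
    best_guess = post_high * p_high + (1 - post_high) * p_low
    print(f"{k:2d} war-free years: P(risk=0.2) = {post_high:.2f}, "
          f"best-guess annual risk = {best_guess:.3f}")
```

After a decade without war, P(risk = 0.2) has already dropped from 0.50 to about 0.11. If the two hypotheses were instead 0.02 and 0.01, the same decade would barely move the posterior, which is the point above about small risks.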
To be concrete: if we lived in the year 1950, I think it’d be reasonable to assign a really high probability to nuclear war in the next few decades, but then to assume that, if we survive those decades, it must be because the risk is low. So the risk over the next 200 years isn’t that much higher than the risk over the next few decades.
In the year 2021, we’ve already seen a lot of years without nukes, so we already have good reason to believe that nuclear war is rare. Seeing a few extra decades without nukes won’t update us much further, so extrapolating annual risks over the next few decades seems fine. Extrapolating all the way to 2100 is a bit shakier, though; maybe I’d guess there’d be something like a 2-10 percentage point difference, depending on how you did it.
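To put a rough number on that, here’s the same comparison run over 80 years (roughly 2021 to 2100) with an entirely made-up prior over annual risks; these numbers are not from ALLFED’s model or anyone’s published estimate:

```
# Gap between naive and mixture extrapolation over ~80 years,
# under a hypothetical prior over annual risks (illustration only).
hypotheses = [(0.02, 0.5), (0.005, 0.5)]  # (annual risk, prior weight)
years = 80

p_mean = sum(p * w for p, w in hypotheses)
naive = 1 - (1 - p_mean) ** years
mixture = sum(w * (1 - (1 - p) ** years) for p, w in hypotheses)
print(f"naive: {naive:.2f}  mixture: {mixture:.2f}  gap: {naive - mixture:.2f}")
```

With these made-up numbers the gap comes out around 7 percentage points, inside that 2-10 point range, but it’s quite sensitive to how spread out the prior is.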
Zvi has now put a postscript in the ALLFED section above. We have updated the inadvertent nuclear war fault tree model result to account for the absence of nuclear war since the model’s data ends, and we have also reduced the annual probability of nuclear war going forward. Then, so as not to overclaim on cost effectiveness, we did not include a correction for non-inadvertent US/Russia nuclear war or for a conflict with China. Resilient foods are still highly competitive with AGI safety according to the revised model.
woop!