The substantive complaint was that they [ALLFED] performed an invalid calculation of the annual probability of nuclear war. They ran a survey to establish a range of probabilities, then averaged them. One could argue about which ‘average them’ moves work for the first year, but over time the absence of a nuclear war is Bayesian evidence in favor of the lower probabilities and against the higher ones. It’s incorrect not to adjust for this, and the complaint was not merely the error, but that the error was pointed out and not corrected.
Tl;dr: ALLFED appreciates the feedback. We disagree that it was a mistake; there were smart people on both sides of this issue. Good epistemics are very important to ALLFED.
Full version:
Zvi is investigating the issue. I won’t name names, but suffice it to say, there were smart people disagreeing on this issue. We have been citing the fault tree analysis of the probability of nuclear war, which we think is the most rigorous study because it uses actual data. Someone did suggest that we should update the probability estimate based on the fact that nuclear war has not yet occurred (excluding World War II). Looking at the paper itself (see the top of page 9 and equation (5) on that page): for conditional probabilities for which effectively zero historical occurrences have been observed out of n total opportunities, the probability in the model was updated according to a Bayesian posterior distribution with a uniform prior and a binomial likelihood function. The probabilities updated in this way were (A) the conditional probability that Threat Assessment Conference (TAC)-level attack indicators will be promoted to a Missile Attack Conference (MAC), and (B) the conditional probability of leaders’ decision to launch in response to mistaken MAC-level indicators of being under attack. Given this methodology, it would be double-counting to update their final distribution further based on the historical absence of accidental nuclear launches over the last 76 years.
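(To make the mechanics of that update concrete, here is a minimal sketch in Python, with a hypothetical n rather than the paper’s actual counts: a uniform Beta(1, 1) prior on a per-opportunity probability, combined with a binomial likelihood and zero observed occurrences out of n opportunities, yields a Beta(1, n + 1) posterior with mean 1/(n + 2).)

```python
from scipy import stats

# Uniform Beta(1, 1) prior over a per-opportunity probability, updated on
# 0 occurrences out of n opportunities (binomial likelihood); the
# conjugate posterior is Beta(1, n + 1).
def posterior_given_zero_occurrences(n):
    return stats.beta(a=1, b=n + 1)

# n = 60 is illustrative only, not the paper's actual count.
post = posterior_given_zero_occurrences(60)
print(post.mean())             # 1 / (n + 2) ~= 0.016
print(post.ppf([0.05, 0.95]))  # ~[0.0008, 0.048], a 90% credible interval
```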
But what we do agree on is that if one starts with a high prior, one should update. That is what one of our coauthors did in his model of the probability of nuclear war, and he got results similar to the fault tree analysis. Furthermore, the fault tree analysis covered only inadvertent nuclear war (one side thinking it is being attacked, and then “retaliating”). However, there are other mechanisms for nuclear war, including intentional attack, and accidental detonation of a nuclear weapon with escalation from there. Furthermore, though many people consider nuclear winter possible only for a US-Russia nuclear war, now that China has a greater GDP at purchasing power parity than the US, we think there is a comparable amount of combustible material there. So the possibility of a US-China or Russia-China nuclear war further increases the probabilities. So even if there should be some downward updating on inadvertent US-Russia nuclear war, I think the fault tree analysis still provides a reasonable estimate. I also explained this on my first 80k podcast.
Also, we say in the paper, “Considering uncertainty represented within our models, our result is robust: reverting the conclusion required simultaneously changing the 3-5 most important parameters to the pessimistic ends.” So as Zvi has recognized, even if one thinks the probability of nuclear war should be significantly lower, the overall conclusion doesn’t change. We have encouraged people to put their own estimates in.
Again, we really appreciate the feedback. Good epistemics are very important to us. We are trying to reach the truth. We want to have the maximum positive impact on the world, which is why we spend a significant amount of time on prioritization.
For clarity: Investigating this further is on my stack, but due to Omicron my stack doth overflow, so I don’t know how long it will take me to get to it.
My interpretation of Zvi’s point wasn’t that your model should account for the past lack of nuclear war, but that it should be sensitive to a future lack of nuclear war. I.e., if you try to figure out the probability that nuclear war happens at least once over (e.g.) the next century, then if it doesn’t happen in the next 50 years, you should assign lower probability to it happening in the 50 years after that. I wrote someone a Slack message about this exact issue a couple of months ago; I’ll copy it here in case it’s helpful:
So here’s a tricky thing with your probability extrapolation: in a randomly chosen year, actors should give a lower probability to P(nuclear war in the next N years) than the naive 1-[1-P(nuclear war next year)]^N.
The reason for this is that the absence of nuclear war in any given year is positively correlated with the absence of nuclear war in any other given year. This positive correlation yields an increased probability that nuclear war never happens in the given time period.
One way to recognise this: Say that someone assigns a 50% chance to the annual risk being exactly 0.2, and a 50% chance to it being exactly 0.01. Then their best guess for next year is 0.105. If this were the actual annual risk, the probability of nuclear war over a decade would be 1-(1-0.105)^10 ~= 0.67. But their actual best guess for nuclear war next decade is 0.5*(1-[1-0.2]^10)+0.5*(1-[1-0.01]^10) ~= 0.49.
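(For concreteness, a minimal Python sketch of both calculations, using the same numbers as above:)

```python
# 50% chance the annual risk is 0.2, 50% chance it is 0.01.
p_high, p_low, years = 0.2, 0.01, 10

best_guess_annual = 0.5 * p_high + 0.5 * p_low   # 0.105
naive = 1 - (1 - best_guess_annual) ** years     # ~0.67
mixture = (0.5 * (1 - (1 - p_high) ** years)
           + 0.5 * (1 - (1 - p_low) ** years))   # ~0.49

print(f"naive: {naive:.2f}, mixture: {mixture:.2f}")
```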
I think one useful framing of this is that, each year that a person sees that nuclear war didn’t happen, they’ll update towards a lower annual risk. So towards the end of the period, this person will have mostly updated away from the chance that the annual risk was 0.2, and they’ll think that the 0.01 estimate is more likely.
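(A short sketch of that year-by-year update, using the same two hypotheses: after a decade without war, the posterior probability of the high-risk hypothesis has dropped from 0.5 to roughly 0.11.)

```python
# Posterior probability of the high-risk hypothesis (annual risk 0.2)
# after k consecutive war-free years, starting from a 50/50 prior
# against the low-risk hypothesis (annual risk 0.01).
p_high, p_low, prior_high = 0.2, 0.01, 0.5

for k in (0, 1, 5, 10):
    like_high = (1 - p_high) ** k  # P(k war-free years | high risk)
    like_low = (1 - p_low) ** k    # P(k war-free years | low risk)
    posterior_high = (prior_high * like_high
                      / (prior_high * like_high
                         + (1 - prior_high) * like_low))
    print(k, round(posterior_high, 3))  # 0.5, 0.447, 0.256, 0.106
```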
This whole phenomenon matters a lot more if the risks you’re dealing with are large than if they’re small. Take the perspective in the previous paragraph: if the annual risk is small, then each year without nuclear apocalypse won’t update you very much. Without big updates, using constant annual probabilities is more reasonable.
To be concrete, if we lived in the year 1950, then I think it’d be reasonable to assign a really high probability to nuclear war in the next few decades, but then to assume that, if we survive the next few decades, it must be because the risk is low. So the risk over the next 200 years isn’t that much higher than the risk over the next few decades.
In the year 2021, we’ve already seen a lot of years without nukes, so we already have good reason to believe that nuclear war is rare. So we won’t update a lot on seeing a few extra decades without nukes, and extrapolating annual risks over the next few decades seems fine. Extrapolating all the way to 2100 is a little shakier, though. Maybe I’d guess there’d be a 2-10 percentage point difference, depending on how you did it.
Zvi has now put a postscript in the ALLFED section above. We have updated the inadvertent nuclear war fault tree model result to account for the absence of nuclear war since the period covered by the data, and we have also reduced the annual probability of nuclear war going forward. Then, so as not to overclaim on cost-effectiveness, we did not include a correction for non-inadvertent US/Russia nuclear war or for conflict with China. Resilient foods are still highly competitive with AGI safety according to the revised model.