Now: we should probably update on the fact that Russia’s invading Ukraine, and the West is sanctioning Russia over that. The question is, how big an update is that? Bayes’ rule says we multiply the odds by the likelihood ratio: that is, the ratio between the probability of something like the current conflict happening given nuclear escalation this year, and the probability of something like the current conflict happening given no nuclear escalation this year. I’ll treat those two separately.
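For concreteness, the odds-form update just described can be sketched in a few lines of Python. The two conditional probabilities are the ones estimated later in this post; the 1:100 prior odds are purely illustrative (the post doesn't commit to a prior), so treat this as a sketch of the mechanics rather than an actual forecast.

```python
from fractions import Fraction

def update_odds(prior_odds, p_evidence_given_h, p_evidence_given_not_h):
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio."""
    likelihood_ratio = p_evidence_given_h / p_evidence_given_not_h
    return prior_odds * likelihood_ratio

# P(conflict like this | escalation) ~ 3/10, P(conflict like this | no
# escalation) ~ 1/10, so the likelihood ratio is 3: the evidence triples
# the odds. With a hypothetical 1:100 prior:
posterior = update_odds(Fraction(1, 100), Fraction(3, 10), Fraction(1, 10))
print(posterior)  # 3/100
```

Using exact fractions rather than floats keeps the arithmetic transparent, which matters more than precision when the inputs are rough subjective estimates anyway.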
Is Bayes’ rule even computationally useful for this kind of evidence? 🤔
I feel like usually with Bayes, the evidence is causally downstream from the hypothesis. This makes it sensible to compute the likelihoods, because one can “roll out” the dynamics from the hypothesis and count out the results. But in this case, it doesn’t really make sense to “roll out” nuclear war or lack thereof, since it happens after the evidence, rather than before it.
Of course you attempt to do it anyway, so I should probably address that:
So: what’s the probability of something like the Ukraine situation given a nuclear exchange this year? I’d actually expect a nuclear exchange to be precipitated by somewhat more direct conflict, rather than something more proxy-like. For instance, maybe we’d expect Russia to talk about how Estonia is rightfully theirs, and how it shouldn’t even be a big deal to NATO, rather than the current world where the focus has been on Ukraine specifically for a while. So I’d give this conditional probability as 1⁄3, which is about 3⁄10.
I’m not sure where these numbers come from. They look like they come from your general impression of this stuff, but what’s the advantage of using Bayes’ rule for this general impression, over just taking the general impression the opposite way, making up a number for the updates due to the current war?
What’s the probability of something like the Ukraine situation given no nuclear exchange this year? Luckily, we can actually empirically estimate this, by looking at all the years NATO and Russia haven’t had a nuclear exchange, and seeing how many of them had something like this Ukraine situation. I’d count the NATO bombing of Yugoslavia, the initial invasion of Ukraine, the Cuban missile crisis, and the Russian invasion of Afghanistan. Let’s say Yugoslavia counts for 1 year, Ukraine 1 counts for 1 year, Cuba counts for 1 year, and Afghanistan counts for 3 years (Wikipedia tells me that the invasion lasted 10 years, but I’ve got to assume for most of those years NATO and Russia had figured out that they weren’t going to nuke each other). So, that’s 6 years out of 70, but Laplace’s law says we should add pseudocounts to make that probability 7⁄72, which is about 1⁄10.
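The counting above is simple enough to check mechanically. A minimal sketch of Laplace's rule of succession applied to these (admittedly judgment-laden) historical counts:

```python
from fractions import Fraction

def laplace(successes, trials):
    """Laplace's rule of succession: add one pseudo-success and one
    pseudo-failure, giving (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# Years "like the Ukraine situation" among the ~70 no-exchange years:
# Yugoslavia (1) + Ukraine 2014 (1) + Cuba (1) + Afghanistan (3) = 6.
p = laplace(6, 70)
print(p)  # 7/72, roughly 0.097 -- about 1/10
```

Note that the answer barely depends on the pseudocounts here: the raw 6⁄70 and the smoothed 7⁄72 are both "about 1⁄10", so the conclusion is robust to exactly how the smoothing is done.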
I suppose this side of the calculation is more sensible, since you can sort of get historical data on it. But the historical data assumes that it doesn’t change over time, which I’m not sure I buy.
Is Bayes’ rule even computationally useful for this kind of evidence? 🤔
It’s at least valid, and I think I made use of it.
I’m not sure where these numbers come from. They look like they come from your general impression of this stuff, but what’s the advantage of using Bayes’ rule for this general impression, over just taking the general impression the opposite way, making up a number for the updates due to the current war?
The advantage is that I didn’t have to make up the other half of the likelihood ratio, which I would have had to do if I just made up the update.
I suppose this side of the calculation is more sensible, since you can sort of get historical data on it. But the historical data assumes that it doesn’t change over time, which I’m not sure I buy.
My sense is that incorporating possible changes over time will be significantly harder and not actually change the answer all that much.