There was never “a probability of slightly less than three parts in a million,”… Ignition is not a matter of probabilities; it is simply impossible.
I am not sure if the phrasing was intentional, but he may have been dodging the intended question. He sounds like he's using "probability" in the sampling-from-a-distribution sense, not the Bayesian-epistemic sense. Yes, ignition is impossible, in the sense that if you set off an atomic bomb a million times you don't get three dead Earths. No, that does not necessarily mean the epistemic state of the Manhattan Project scientists before the Trinity test included evidence of that physical fact sufficient to bet on it at odds better than 333k:1.
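(As a minimal sketch of the arithmetic, the 333k:1 figure is just the odds form of the quoted three-in-a-million probability; the helper name below is mine, purely for illustration.)

```python
# Convert a small probability into "N:1 against" betting odds.
def odds_against(p: float) -> float:
    """Return the N in 'N:1 against' for an event of probability p."""
    return (1 - p) / p

p_ignition = 3e-6  # the oft-quoted "three parts in a million"
print(f"{odds_against(p_ignition):,.0f}:1 against")  # ~333,332:1, i.e. roughly 1/p for small p
```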
I think this points at my general question: in each of the past cases, given the available data and tools, what is the lowest probability the scientists involved could have arrived at before it became indistinguishable from zero? I could be convinced that there was nothing the Manhattan Project scientists could have learned or tested, with the tools and time available, to get their estimate much below 3e-6, whereas CERN had much better theoretical, computational, and experimental support (and less time pressure), enabling them to push to 2e-8 or lower.
SETI discussions never seem to include quantified risk estimates, and I can think of several reasons for that. My own conclusion there has always been something like "No aliens powerful enough to be a threat need us to announce our presence. They knew we were here before we did."
The fact that AI researchers do make these estimates, and that they're 3-12 OOMs larger than the estimates in any of the other cases, is really all we should need to know to understand how different the cases are from each other.
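(To make the OOM comparison concrete, here is a rough sketch; the AI-risk figure below is a hypothetical placeholder of my own, not any particular researcher's number.)

```python
import math

# Order-of-magnitude gap between an assumed illustrative AI-risk estimate
# and the two historical figures discussed above.
historical = {"Trinity (3e-6)": 3e-6, "LHC (2e-8)": 2e-8}
ai_estimate = 0.05  # hypothetical, for illustration only

for label, p in historical.items():
    gap = math.log10(ai_estimate / p)
    print(f"{label}: ~{gap:.1f} OOMs below the illustrative AI estimate")
    # prints ~4.2 and ~6.4 OOMs respectively, well inside the 3-12 OOM range
```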