Given the stakes, none of the probabilities you mention have enough nines to make me comfortable. I would recommend doing basically the same thing in all cases: produce a decent argument and convince a bunch of physicists, then convince the military leadership of the Manhattan Project, then convince the political leadership.
Experimentally, my wrong-physics-subfield guess is you’re going to want to do things like measure interaction cross-sections of nitrogen nuclei. You can do this with a room-sized particle accelerator, I think.
I came up with this thought experiment after realizing just how wild it is that the Manhattan Project scientists didn’t have your reaction. Even a one-in-ten-thousand chance seems reckless.
When Teller informed some of his colleagues of this possibility, he was greeted with both skepticism and fear. Hans Bethe immediately dismissed the idea, but according to author Pearl Buck, Nobel Prize-winning physicist Arthur Compton was so concerned that he told Robert Oppenheimer that if there were even the slightest chance of this “ultimate catastrophe” playing out, all work on the bomb should stop.
“It would be the ultimate catastrophe,” Compton said gravely. “Better to accept the slavery of the Nazis than to run the chance of drawing the final curtain on mankind!”
So a study was commissioned to explore the matter in detail, and six months before the Trinity test, the very first detonation of a nuclear device, Edward Teller and Emil Konopinski announced their findings in a report with the ominous title “Ignition of the Atmosphere With Nuclear Bombs.”
“It is shown that, whatever the temperature to which a section of the atmosphere may be heated, no self-propagating chain of nuclear reactions is likely to be started. The energy losses to radiation always overcompensate the gains due to the reactions.”
As far as I know, they satisfied themselves with evidence many orders of magnitude stronger than 0.01% probability.
Accounting for this (if you wanted to give a probability level that corresponds to the state of evidence after inquiries were made) would require some extra diligence in how you use probabilities—it’s easy to write a post with probabilities in it and say “50%,” but it’s hard to estimate how many bits of evidence are in a calculation of the required conditions for nitrogen fusion. But I would guess they were at about 0.00000000000001%.
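To make the bits-of-evidence framing concrete, a probability can be converted to bits via log-odds. A minimal sketch (my own illustration, not from the thread; the function name is made up):

```python
import math

def bits_against(p):
    """Bits of evidence against an event with probability p (log2 of the odds against)."""
    return math.log2((1 - p) / p)

# The 0.01% level mentioned upthread corresponds to roughly 13 bits against;
# the 0.00000000000001% (1e-16) guess above would require about 53 bits.
print(round(bits_against(1e-4), 1))   # → 13.3
print(round(bits_against(1e-16), 1))  # → 53.2
```

The gap between ~13 and ~53 bits is the point: each additional factor of ten of confidence demands another ~3.3 bits of genuine evidence, which is hard to extract from a single calculation.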
They may have estimated a probability on that order of magnitude (informally), but given what we know now about how accurate their models were at the time, they were unjustified in that level of confidence. There are models, not much further from theirs than our modern ones are, in which the atmosphere could have been ignited by a nuclear weapon.
It’s not that the chances of actually igniting the atmosphere were greater than they thought: they weren’t. They got the right answer.
However, they got it by overconfident reasoning on some wrong measurements and a model that was ignorant of some processes that we now know to be important. They got the right answer by accident. If some trusted oracle had told them the true margins of error in their calculations (but not the signs), I hope they would have been appalled and cancelled the project.
Source?
I don’t have a cite unfortunately, so feel free to take it with credence approximately zero. Bethe’s ignition calculations were a topic of public discussion at the time, and came up during an astrophysics tutorial. In the next session, the tutor produced an article from the ’80s showing that some cross-section figures that would have been cutting-edge knowledge in the early ’40s, and were likely used in the safety calculation, were incorrect by three orders of magnitude, even though the error bars at the time were less than 30%. Fortunately the true figures gave a much lower cross section (and therefore chance of ignition), but if they had been off by a similar magnitude in the opposite direction, it would have been unacceptably risky.
It was used as a lesson to us that confidence intervals only have any validity at all if your model is correct in the first place. The lesson I took away from it was that when you’re doing research, you should expect that your model is not correct in some important way, and you should be very much more cautious about how safe something is.
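The arithmetic behind that lesson is striking. If the stated 30% error bars had been taken at face value (here modeled as a lognormal, an assumption of mine, with illustrative numbers rather than the actual historical figures), a three-order-of-magnitude error would have been a ~26-sigma event:

```python
import math

# A stated 1-sigma relative error of 30% corresponds to a lognormal
# scale parameter of ln(1.3).
sigma = math.log(1.3)

# A true value off by a factor of 1000 is this many sigmas out:
z = math.log(1000) / sigma
# One-sided tail probability of such a deviation under the stated error model:
tail = math.erfc(z / math.sqrt(2)) / 2

print(f"{z:.1f} sigma")  # → 26.3 sigma
print(f"{tail:.1e}")     # effectively zero under the stated error model
```

Under the model, such an error is impossible for all practical purposes; the fact that it actually happened means the model itself was wrong, which is exactly why confidence intervals inherit no validity from the calculation that produced them.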
Huh, interesting. Definitely an example of why multiple semi-independent lines of evidence are a good idea. I wonder if you could get the relative rates of hydrogen and nitrogen fusion out of the distribution of elements in the sun, even without having to know its age… except of course we’re made out of leftovers, which means you can only put a bound on it.
So for the purposes of my thought experiment, imagine that they concluded that there was a small but non-zero chance, at the level where it’s still meaningful to consider the moral weight of losing millions of lives in a battle over Japan’s home islands, or losing a post-war arms race to the Soviets. Presumably the question would be answered by the Soviets eventually anyway unless there was an international agreement. Also, for the sake of argument we can consider any atmospheric nuclear test to settle the question once and for all. (We don’t have to ask this question again for each new bomb. Once any bomb explodes and we don’t die, the question is answered.)
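The break-even point of that trade-off can be sketched with a toy expected-value calculation. All the numbers below are illustrative assumptions, not historical estimates, and the model deliberately counts only people then alive:

```python
# Toy expected-value comparison for the thought experiment.
lives_lost_invasion = 10_000_000       # assumed cost of invading the home islands
world_population_1945 = 2_300_000_000  # lives lost if the atmosphere ignites

# Break-even ignition probability: above this, not testing is the better bet
# even on this narrow accounting.
p_breakeven = lives_lost_invasion / world_population_1945
print(f"{p_breakeven:.4f}")  # → 0.0043
```

Even this deliberately conservative accounting puts the break-even near one in a few hundred; any weight at all on future generations pushes the acceptable probability many orders of magnitude lower, which is the level of confidence the thought experiment asks about.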
They had almost precisely his reaction:
https://www.insidescience.org/manhattan-project-legacy/atmosphere-on-fire
https://www.realclearscience.com/blog/2019/09/12/the_fear_that_a_nuclear_bomb_could_ignite_the_atmosphere.html