As far as I know, they satisfied themselves with evidence many orders of magnitude stronger than 0.01% probability.
Accounting for this (if you wanted to give a probability level that corresponds to the state of evidence after inquiries were made) would require some extra diligence in how you use probabilities—it’s easy to write a post with probabilities in it and say “50%,” but it’s hard to estimate how many bits of evidence are in a calculation of the required conditions for nitrogen fusion. But I would guess they were at about 0.00000000000001%.
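To make the bits-of-evidence bookkeeping concrete, here is a rough back-of-the-envelope sketch (my own, not anything from the actual safety calculation) of how a figure like 0.00000000000001% translates into bits, assuming you start from even odds:

```python
import math

# Hypothetical figure from the discussion above, written as a plain probability.
p_ignition = 1e-16  # 0.00000000000001%

# Bits of evidence needed to move from even odds (1:1) to odds of p : (1 - p).
bits = math.log2((1 - p_ignition) / p_ignition)
print(f"roughly {bits:.1f} bits of evidence against ignition")  # ~53.2 bits
```

So quoting a probability that small is implicitly claiming to hold something like fifty bits of evidence, which is a lot to extract from one calculation.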
They may have estimated a probability on that order of magnitude (informally), but given what we know now of how accurate their models were at the time, they were unjustified in such a level of confidence. There are models not much further from theirs than our modern ones are in which the atmosphere could have been ignited by a nuclear weapon.
It’s not that the chances of actually igniting the atmosphere were greater than they thought: they weren’t. They got the right answer.
However, they got it by overconfident reasoning on some wrong measurements and a model that was ignorant of some processes that we now know to be important. They got the right answer by accident. If some trusted oracle had told them the true margins of error in their calculations (but not the signs), I suspect (or rather, hope) they would have been appalled and cancelled the project.
Source?
I don’t have a cite unfortunately, so feel free to take it with credence approximately zero. Bethe’s ignition calculations were a topic of public discussion at the time, and came up during an astrophysics tutorial. In the next session, the tutor produced an article from the ’80s showing that some cross-section figures that would have been cutting-edge knowledge in the early ’40s, and were likely used in the safety calculation, were incorrect by three orders of magnitude, even though the error bars at the time were less than 30%. Fortunately the true figures were for a much lower cross section (and therefore a lower chance of ignition), but if the error had been of similar magnitude in the opposite direction, the test would have been unacceptably risky.
It was used as a lesson to us that confidence intervals only have any validity at all if your model is correct in the first place. The lesson I took away from it was that when you’re doing research, you should expect that your model is wrong in some important way, and you should be correspondingly more cautious about how safe you judge something to be.
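To put toy numbers on why the stated error bars were beside the point (these are stand-ins, not the actual 1940s cross-section data): a value reported with a 30% Gaussian error bar that turns out to be wrong by a factor of a thousand sits thousands of standard deviations from the truth, an outcome the stated model treats as impossible.

```python
# Hypothetical numbers: a cross section reported with a Gaussian 30% error bar,
# where the true value later turns out to be 1000x the reported one.
reported = 1.0                   # arbitrary units
sigma = 0.30 * reported          # the ~30% error bar quoted above
true_value = 1000.0 * reported   # a three-order-of-magnitude error

z = (true_value - reported) / sigma
print(f"the truth sits {z:.0f} sigma from the reported value")  # ~3330 sigma
# Under the stated Gaussian model that is effectively impossible, which is the
# point: the interval was only as good as the model that produced it.
```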
Huh, interesting. Definitely an example of why multiple semi-independent lines of evidence are a good idea. I wonder if you could get the relative rates of hydrogen and nitrogen fusion out of the distribution of elements in the sun, even without having to know its age… except of course we’re made out of leftovers, which means you can only put a bound on it.
So for the purposes of my thought experiment, imagine that they concluded that there was a small but non-zero chance, at a level where it’s still meaningful to weigh it against the moral weight of losing millions of lives in a battle over Japan’s home islands, or losing a post-war arms race to the Soviets. Presumably the question would be answered by the Soviets eventually anyway unless there was an international agreement. Also, for the sake of argument, we can consider any atmospheric nuclear test to settle the question once and for all. (We don’t have to ask this question again for each new bomb. Once any bomb explodes and we don’t die, the question is answered.)
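As a very rough illustration of the kind of expected-value comparison I have in mind (every number here is a placeholder I made up for the thought experiment, not an estimate anyone at Los Alamos produced):

```python
# All numbers are placeholders for the thought experiment.
p_ignite = 1e-6                 # stipulated "small but non-zero" chance of ignition
world_population_1945 = 2.3e9   # roughly the world population at the time
lives_lost_invasion = 5e6       # stipulated cost of a battle over the home islands

expected_deaths_if_tested = p_ignite * world_population_1945  # ~2,300
expected_deaths_if_not = lives_lost_invasion                  # 5,000,000 (ignoring arms-race risks)

print(expected_deaths_if_tested, expected_deaths_if_not)
# The one-shot structure matters: a single successful atmospheric test settles
# the question, so the ignition risk is paid at most once, not once per bomb.
```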