The figure of 3/1,000,000 for the probability of the Trinity nuke destroying the world is almost certainly too low. Consider that, subjectively, the scientists should have assigned at least a 1 in 1000 probability that they had made a mistake in their calculation of safety, and probably more like 1 in 100, considering that the technology was entirely new. In fact, the first serious mistake in a physical calculation that resulted in an actual nuclear disaster was Castle Bravo, which occurred probably only 50-150 detonations after Trinity. Since then we have had Chernobyl, which is arguably a somewhat different case, but still roughly 10,000 dead, and apparently set off by scientists who thought what they were doing was safe.
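To see why the headline figure can't be taken at face value, here is a minimal back-of-the-envelope sketch. The 3 in 1,000,000 figure and the 1 in 100 chance of a flawed calculation come from the discussion above; the chance that a flaw actually spells disaster is purely an illustrative assumption of mine:

```python
# Back-of-the-envelope: the computed risk only applies if the safety
# calculation itself is correct, so the "we made a mistake" term dominates.
p_computed        = 3 / 1_000_000  # risk given the calculation is right (Trinity figure)
p_calc_wrong      = 1 / 100        # chance the safety calculation has a serious flaw
p_bad_given_wrong = 0.1            # illustrative assumption: chance a flaw means disaster

p_total = (1 - p_calc_wrong) * p_computed + p_calc_wrong * p_bad_given_wrong
print(f"overall risk ~ {p_total:.1e}")  # ~1e-3, hundreds of times the headline figure
```

On these (made-up but not crazy) numbers, the "our calculation might be wrong" term swamps the 3 in 1,000,000 almost entirely.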
Another way to look at it is to ask yourself: if 100 similar incidents occurred (that is, instances of scientists developing a new and very destructive technology in wartime, worrying that it just might blow the whole world up, but concluding it's probably OK), how many failures would you expect?
Looking at it this way, even 1 in 100 is too optimistic. The dominant failure mode is that the scientists fail to grasp a crucial consideration, like trying to build a military superintelligence without understanding the need for Friendly AI; I suspect that failure mode occurs with probability more like 1 in 8.
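To make the 100-incidents framing concrete, here is a quick sketch plugging in the 1 in 100 and 1 in 8 per-incident failure rates discussed above (treating the incidents as independent, which is my simplifying assumption, not a claim about the world):

```python
# Expected failures among 100 similar incidents, for two per-incident failure rates.
# Assumes, purely for illustration, that the incidents are independent.
for p in (1 / 100, 1 / 8):
    expected = 100 * p
    at_least_one = 1 - (1 - p) ** 100
    print(f"p = {p:.3f}: expect ~{expected:.0f} failures, "
          f"P(at least one) = {at_least_one:.2f}")
```

At 1 in 100 you already expect about one failure (and a roughly 63% chance of at least one); at 1 in 8 you expect around a dozen, and at least one is essentially certain.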
In fact, now that I think about it, the dominant failure mode is that the overall project leadership fails to listen to those scientists who say they might have found a crucial consideration, if there is one.