Why assign a 90% probability to chain reactions being impossible or unfeasible? How should Fermi have known that, especially when it was false?
EDIT: Be careful with your arguments that Fermi should have assigned the false fact ‘chain reactions are impossible’ an even more extreme probability than 90%. You are training your brain to assign higher and more extreme probabilities to things that are false. You should be looking for potential heuristics that should have fired in the opposite direction. There’s such a thing as overfitting, but there’s also such a thing as being cleverly contrarian about reasons why nobody could possibly have figured out X and thus training your brain in the opposite direction of each example.
Because ordinary matter is stable, and the Earth (and, for more anthropically stable evidence, the other planets) hadn’t gone up in a nuclear chain reaction already?
Without using hindsight, one might presume that a universe in which nuclear chain reactions were possible would be one in which they happened to ordinary matter under normal conditions, or else only to totally unstable elements, not one in which they barely worked in highly concentrated forms of particular not-very-radioactive isotopes. This also explains his presumption that even if it worked, it would be highly impractical: given the orders of magnitude of uncertainty, “chain reactions don’t naturally occur but they’re possible to engineer on practical scales” seemed to be represented by only a narrow band of the possible parameters.
I admit that I don’t know what evidence Fermi did and didn’t have at the time, but I’d be surprised if Szilard’s conclusions were as straightforward an implication of current knowledge as nanotech seems to be of today’s current knowledge.
Strictly speaking, chain reactions do naturally occur, they’re just so rare that we never found one until decades after we knew exactly what we were looking for, so Fermi certainly didn’t have that evidence available.
Also, although I like your argument… wouldn’t it apply as well to fire as it does to fission? In fact we do have a world filled with material that doesn’t burn, material that oxidizes so rapidly that we never see the unoxidized chemical in nature, and material that burns only when concentrated enough to make an ignition self-sustaining. If forests and grasslands were as rare as uranium, would we have been justified in asserting that wildfires are likely impossible?
One reason why neither your argument nor my analogy turned out to be correct: even if one material is out of a narrow band of possible parameters, there are many other materials that could be in it. If our atmosphere were low-oxygen enough to make wood noncombustible, we might see more plants safely accumulating more volatile tissues instead. If other laws of physics made uranium too stable to use in technology, perhaps in that universe fermium would no longer be too unstable to survive in nature.
Consider also the nature of the first heap: Purified uranium and a graphite moderator in such large quantities that the neutron multiplication factor was driven just over one. Elements which were less stable than uranium decayed earlier in Earth’s history; elements more stable than this would not be suitable for fission. But the heap produced plutonium by its internal reactions, which could be purified chemically and then fizzed. All this was a difficult condition to obtain, but predictable that human intelligence would seek out such points in possibility-space selectively and create them—that humans would create exotic intermediate conditions not existing in nature, by which the remaining sorts of materials would fizz for the first time, and that such conditions indeed might be expected to exist, because among some of the materials not eliminated by 5 billion years, there would be some unstable enough to decay in 50 billion years, and these would be just-barely-non-fizzing and could be pushed along a little further by human intervention, with a wide space of possibilities for which elements you could try. Or to then simplify this conclusion: “Of course it wouldn’t exist in nature! Those bombs went off a long time ago, we’ll have to build a slightly different sort! We’re not restricted to bombs that grow on trees.” By such reasoning, if you had attended to it, you might have correctly agreed with Szilard, and been correctly skeptical of Fermi’s hypothetical counterargument.
Not taking into account that engineering intelligence will be applied to overcome the first hypothetical difficulty is, indeed, a source of systematic directional pessimistic bias in long-term technological forecasts. Though in this case it was only a decade. I think if Fermi had said that things were 30 years off and Szilard had said 10, I would’ve been a tad more sympathetic toward Fermi because of the obvious larger reference class—though I would still be trying not to update my brain in the opposite direction from the training example.
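The phrase “driven just over one” in the pile description is doing a lot of work: with multiplication factor k, the neutron population scales as k raised to the number of generations, so values barely below and barely above 1 give wildly different outcomes. A toy sketch with illustrative numbers (not the pile’s actual parameters):

```python
# Toy model of a chain reaction: the neutron population after g
# generations is n0 * k**g, where k is the multiplication factor
# (average neutrons from one fission that go on to cause another).
# The k values below are illustrative only.

def population(k, generations, n0=1.0):
    """Neutron count after the given number of generations at factor k."""
    return n0 * k ** generations

for k in (0.998, 1.000, 1.002):
    print(k, population(k, 5000))
```

With these numbers, k = 0.998 dies away to almost nothing over 5000 generations, while k = 1.002 grows by a factor of tens of thousands; the cliff at k = 1 is why “just over one” suffices.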
because among some of the materials not eliminated by 5 billion years, there would be some unstable enough to decay in 50 billion years, and these would be just-barely-non-fizzing and could be pushed along a little further by human intervention
Except there aren’t any that are not eliminated by, say, 10 billion years. And even 40 million years eliminate everything you can make a nuke out of except U-235. This is because besides fizzing, unstable nuclei undergo a highly asymmetric spontaneous fission known as alpha decay.
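The elimination argument is plain exponential decay: after time t, a fraction 2^(−t/T) of an isotope with half-life T survives. A quick check with rounded textbook half-lives (assumed values, worth verifying against a table of isotopes):

```python
# Fraction of an isotope surviving after t years, given its half-life:
# N/N0 = 2 ** (-t / T).

def surviving_fraction(t_years, half_life_years):
    return 2.0 ** (-t_years / half_life_years)

AGE_OF_EARTH = 4.5e9  # years

# U-235, half-life ~7.0e8 years: about 1% of the primordial stock remains.
print(surviving_fraction(AGE_OF_EARTH, 7.0e8))

# Pu-239, half-life ~2.4e4 years: the exponent is about -190,000,
# so the surviving fraction underflows to exactly zero.
print(surviving_fraction(AGE_OF_EARTH, 2.4e4))
```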
Good counter-analogy, and awesome Wikipedia article. Thanks!

A clever argument! Why didn’t it work on Reality?

I spot two holes.

First, the elephant in the living room: the sun.

Matter usually ends up as a fusion-powered, flaming hell. (If you look really closely it is not all like that; there are scattered little lumps in orbit, such as the Earth and Mars.)
Second, a world view with a free parameter, adjusted to explain away vulcanism.
Before the discovery of radioactivity, the source of the Earth’s internal heat was a puzzle. Kelvin had calculated that the heat from the Earth’s gravitational collapse, from dispersed matter to planet, was nowhere near enough to keep the Earth’s internal fires going for the timescales which geologists were arguing for.
Enter radioactivity. But nobody actually knows the internal composition of the Earth. The amount of radioactive material is a free parameter. You know how much heat you need, and you infer the amount of Thorium and Uranium that “must” be there. If there is extra heat due to chain reactions, you just revise the estimate downwards to suit.
Sticking to the theme of being less wrong, how does one see the elephant in the room? How does one avoid missing the existence of spontaneous nuclear fusion on a sunny day? Pass.
The vulcanism point is more promising. The structure of the error is to say that vulcanism does not count against the premise “ordinary matter is stable” because we’ve got vulcanism fully explained. We’ve worked out how much Uranium and Thorium there needs to be to explain it, and we’ve bored holes 1000 km deep and checked and found the correct amount. But wait! We haven’t done the bore-hole thing, and it is hard to remember this because it is so hopelessly impractical that we are not looking forward to doing it. In this case we assume that we have dotted the i’s and crossed the t’s on the existing theory when we haven’t.
One technique for avoiding “clever arguments” is to keep track of which things have been cross-checked and which things have only a single chain of inference and could probably be adjusted to fit a new phenomenon. For example, there was a long time in astronomy when estimates of the distances to galaxies used Cepheid variables as a standard candle, and that was the only way of putting an absolute number on the distance. So there was room for a radical new theory that changed the size of the universe a lot, provided it mucked about with nuclear physics, putting the period/luminosity relationship into doubt (hmm, maybe not; I think it is an empirical relationship based on using parallax to get measured values from galactic Cepheid variables). Anyway, along come type Ia supernovae as a second standard candle, and intergalactic distances are calculated two ways and are on a much firmer footing.
So there are things you know that you only know via one route, and there is an implicit assumption that there is nothing extra that you don’t know about. Things that you only know via a single route can be useless for ruling out surprising new things.
And there are things you know that you know via two routes that pretty much agree. (If they disagree, then you already know that there is something you don’t know.) Things you know via two routes do have some power of ruling out surprising new things. The new thing has to sneak in between the error bars on the existing agreement, or somehow produce a coordinated change to preserve the agreement, or correctly fill the gap opened up by changing one thing and not the other.
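The two-route idea can be phrased as a crude consistency test: two independent measurements agree if their difference fits within their combined error bars, and any surprising new theory has to sneak in through that window. A toy sketch (the distances and error bars are invented for illustration):

```python
import math

def consistent(x1, sigma1, x2, sigma2, n_sigma=2.0):
    """True if two independent measurements agree to within n_sigma
    combined standard errors (errors added in quadrature)."""
    return abs(x1 - x2) <= n_sigma * math.hypot(sigma1, sigma2)

# Invented example: distance to a galaxy (Mpc) via two standard candles.
cepheid_distance, cepheid_err = 18.5, 1.2
sn_ia_distance, sn_ia_err = 19.1, 0.9

# The two routes cross-check, so a new theory that moved one estimate
# far outside the joint error bars would be in immediate trouble.
print(consistent(cepheid_distance, cepheid_err, sn_ia_distance, sn_ia_err))
```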
I thought they did know that if the sun was solely dependent on chemical reactions, then it would have burned itself out more quickly than the age of the earth suggested.
I was glibly assuming that Fermi would know that the sun was nuclear powered. So he would already have one example of a large-scale nuclear reaction to hand. Hans Bethe won his Nobel prize for discovering this. Checking dates: this obituary dates the discovery to 1938. So the timing is a little tight.
As you say, they knew that the sun wasn’t powered by chemical fires (they wouldn’t burn for long enough), but perhaps I’m expecting Fermi to have assimilated new physics more quickly than is humanly possible.
Major nitpick: stars are examples of sustained nuclear fusion, not fission. The two are sustained by completely different mechanisms, so observation of nuclear fusion in stars doesn’t really tell us anything about the possibility of sustained nuclear fission.
Minor nitpick: it’s spelled volcanism, not vulcanism.
I’m looking at the outside view argument: matter is stable so we don’t expect to get anything nuclear.
But we look at the sun and see a power source with light atoms fusing to make medium weight ones. We already know about the radioactive decay of heavy atoms, and the interesting new twist is the fission of heavy atoms resulting in medium weight atoms and lots of energy. We know that it is medium weight atoms that are most stable, there is surplus energy to be had both from light atoms and heavy atoms. Can we actually do it with heavy atoms? It works elsewhere with light atoms, but that’s different. We basically know that it is up for grabs and it is time to go to the laboratory and find out.
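The “medium weight atoms are most stable” point is the binding-energy-per-nucleon curve, which peaks near iron; energy is released by moving toward the peak from either end. A sketch with approximate, rounded textbook values (MeV per nucleon):

```python
# Approximate binding energy per nucleon in MeV (rounded textbook values).
binding_mev = {
    "H-2": 1.1,    # light: fusing climbs toward the peak
    "He-4": 7.1,
    "Fe-56": 8.8,  # near the peak of the curve
    "U-238": 7.6,  # heavy: fissioning climbs toward the peak
}

# Surplus energy per nucleon available from each end of the curve:
print(binding_mev["He-4"] - binding_mev["H-2"])     # from fusing light nuclei
print(binding_mev["Fe-56"] - binding_mev["U-238"])  # from fissioning heavy ones
```

The gap is much larger on the light end, which is why fusion releases far more energy per unit mass than fission, but both ends have surplus energy to give up.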
I fear that I have outed myself with my tragic spelling error. People will be able to guess that I’m a fan of Mr Spock from the planet Vulcan ;-(

Quoted for irony.

I’m not sure if pointing out my typo was your intent there, but you caused me to notice it, so I fixed it.
At least nine times out of ten in the history of physics, that heuristic probably did work. I agree that Fermi was wrong not to track down a perceived moderately small chance of a consequential breakthrough, but I can’t believe with any confidence that his initial estimate was too low without the power of hindsight.
Is there a good example of a conspiracy including physicists of the same prior fame as Rabi and Fermi (Szilard was then mostly an unknown) which was pursuing a ‘remote possibility’, of similar impact to nuclear weapons, that didn’t pan out? Obviously we would have a much lower chance of hearing about it, especially on a cursory reading of history books, but the chance is not zero; there are allegedly many such occasions, and the absence of any such known cases is not insignificant evidence. Bolded to help broadcast the question to random readers, in case somebody who knows of an example runs across this comment a year later. The only thing I can think of offhand that is arguably in the same reference class would be polywell fusion today, assuming it doesn’t pan out. There’s no known conspiracy there, but there’s a high-impact argument and Bussard previously working on the polywell.
Is there a good example of a conspiracy including physicists of the same prior fame as Rabi and Fermi (Szilard was then mostly an unknown) which was pursuing a ‘remote possibility’, of similar impact to nuclear weapons, that didn’t pan out?
Do you have a set of examples where it did pan out, or are we just talking about a description crafted to describe a particular event?
Restricting to physicists cuts us off from talking about other areas like bioweapons research, where indeed most of the “remote possibilities” of apocalyptic destruction don’t pan out. Computer scientists did not produce AI in the 20th century, and it was thought of as at least a remote possibility.
For physicists, effective nuclear missile defense using beam weapons and interceptors did not pan out.
Radioactivity was discovered via the “fluorescence is responsible for x-rays” idea, which did not pan out...
There’s a large number of fusion-related attempts that did not pan out at all; there’s fission of lithium, which can’t be used for a chain reaction and is only used for making tritium; there’s hafnium triggering, which might or might not pan out (and all the other isomers); and so on.
For the most part, chasing or not chasing “wouldn’t it be neat if” scenarios doesn’t have much of an effect on science, it seems—Fermi would still inevitably have discovered secondary neutrons even if he wasn’t pursuing the chain reaction (provided someone else didn’t do it before him).
They were not hell-bent on obtaining grant money for a fission bomb no matter what. The first thing they had to do was measure fission cross sections over the neutron spectra, and in the counterfactual world where U-235 does not exist but they detected fission anyway (because high-energy neutrons do fission U-238), they did the founding work for accelerator-driven fission, whose fission products treat cancer around the world (the radiation sources used in medicine would still be produced somehow). In that world maybe you go on using it in some other sequence about how Szilard was wrong and Fermi dramatically overestimated, and how obviously the chance was far lower because they were talking of one isotope and not a single isotope works, and how stupid it is to think that fissioning and producing neutrons is enough for a chain reaction (the bar on that is a tad higher), etc. In that alternate world, today, maybe there’s even an enormous project trying to produce—in an accelerator or something more clever—enough plutonium to kick-start a breeder-reactor economy. Or maybe we got fusion power plants there, because a lot of effort was put into that (plus the Manhattan Project never happened, and some scientists perhaps didn’t get cancer). edit: Or actually, a combination of the two could have happened at some point much later than 1945: a sub-unity tokamak which produces neutrons via fusion, to irradiate uranium-238 and breed enough plutonium to kick-start breeder reactors. Or maybe not, because it could have taken a long while there until someone measured the properties of plutonium. Either way, Fermi and Szilard end up looking awesome.
How about the original Pascal’s wager? It was made by a famed mathematician rather than a famed physicist, and it wasn’t a conspiracy, but it’s definitely in the same reference class.
Because they didn’t know if fission produced enough prompt neutrons, which is clear from the quoted passage, and probably also because Fermi estimated that there were on the order of 10 other propositions about the results of fission which he, if presented with them by an equally enthusiastic proponent, would find comparably plausible. I’m thinking that in the alternate realities where fission does something other than producing a sufficient number of neutrons (about 3 on average), you’d assign a likewise high number to them by hindsight, with a sum greater than 1 (so stop calling it probability already).
They had not been demonstrated experimentally, to be sure; but they were still the default projection from what was already known.
What I am guessing happened (you’re welcome to research the topic): first you can learn that uranium can be fissioned by neutrons (which you make, if I recall correctly, by irradiating lithium with alpha particles). Then, you may learn that fission produces neutrons; because, it so happens, you don’t just see all of that in a microscope, you see particle tracks in photographic emulsion or a cloud chamber or the like, and neutrons, being neutral, are hard to detect. (edit: And this is how I read the quote, anyway, on the first reading. I just parse it as low probability of neutrons, high probability of chain reaction if there’s enough neutrons.)
So at first you do not know if fission produces neutrons, without a very precise and difficult analysis of the conservation of momentum, or a big enough experiment to actually be able to count them, or something likewise clever and subtle. Come to think of it, chronologically, you may happen to first acquire weak evidence that fission does not produce prompt neutrons, by detecting beta decay from the fission products, which implies that they still have too many neutrons for their atomic number. And perhaps by detecting recoil from the delayed neutrons (which are too few for a chain reaction, and too delayed for a bomb).
Why didn’t it work on Reality?
Or did it? It’s a bit like arguing how dumb it was to predict a 1⁄6 probability for the die rolling 1, when it in fact rolled 1. Given 6 sides and a lack of information to prefer one over another, the probability is 1⁄6 (edit: or less, of course). The relevant reality here is the available knowledge and the mechanism that assigns plausibilities, and the first step of “working” is probabilities (somehow related to plausibilities) summing to 1. You ought to be able to test a mind upload’s priors—just run them in parallel a very large number of times, having them give opinions about probabilities on various topics, and see what mutually exclusive scenarios sum to, or if the sum even converges. Ghmm.
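That first step of “working”, probabilities over mutually exclusive outcomes summing to 1, is simple enough to write down as a check. A sketch with made-up assignments:

```python
def coherent(probabilities, tolerance=1e-9):
    """Check that the probabilities assigned to mutually exclusive,
    exhaustive outcomes are each in [0, 1] and sum to 1."""
    if any(p < 0.0 or p > 1.0 for p in probabilities):
        return False
    return abs(sum(probabilities) - 1.0) <= tolerance

# A fair die: six mutually exclusive outcomes at 1/6 each.
print(coherent([1 / 6] * 6))       # True

# Hindsight-flavored assignments: three "obvious in retrospect" rival
# outcomes each given 0.9 cannot all be probabilities at once.
print(coherent([0.9, 0.9, 0.9]))   # False
```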
Ohh, the elephant in the room that I somehow neglected to mention. (It is hard to argue against silly ideas, I suspect for a reason similar to why it is very hard or impossible to truly reflect on how you visually tell apart cats and dogs.)
There are a lot of nuclei that can fission but can’t sustain a chain reaction! Because they do not produce high-enough-energy neutrons, or they capture neutrons too often and fission too rarely, and so on. And the neutron sources of the time (radium plus lithium, or radium plus beryllium, or something like that) produced a lot of high-energy neutrons.
It would be quite interesting if someone far more obsessive-compulsive than me would go over the table of isotopes and see whether about 1 in 10 isotopes that can fission when irradiated with a radium-lithium or radium-beryllium neutron source produce enough neutrons of high enough energy. Because if it is close to 1 in 10, and I think it is (on an appropriate, i.e. logarithmic, scale), then the evidence that an isotope can fission will only get you to a 1 in 10 chance that it makes neutrons that can fission it.
Szilard was proposing the idea of fission chain reactions in general. Of course he would be less confident if asked about a specific isotope, but he’s still right that the idea is important even if he gets the isotope wrong. Anyway, the fact that he discusses uranium specifically shows that the evidence available to him points toward uranium and that this sort of reference class is not using all the evidence that they had at the time.
and that this sort of reference class is not using all the evidence that they had at the time.
You’re making it sound like you have half of the periodic table on the table. You don’t. There’s U-238, U-235, Th-232, and that’s it. Forget plutonium; you won’t be making any significant amount of that in 1945 without a nuclear reactor. Of these, the evidence for fission would be coming, actually, from U-238 fissioning by fast neutrons, and U-238 can’t sustain a chain reaction because too many of the neutrons slow down before they fission anything, and slow neutrons get captured rather than cause fission.
U-235 is the only naturally abundant fissile isotope, and it has a half-life of 700 million years, which is 4400 times longer than the half-life of the second most stable fissile isotope (U-233) and 30,000 times longer than that of the third most stable (that’s it: the factor of 4400 difference, then a factor of less than 7, and so on). That’s how much of a fluke U-235 is. One can legitimately wonder if our universe is fine-tuned for U-235 to be so stable.
edit: note the confusing terminology here: “fissile” means capable of supporting a chain reaction, not merely capable of fissioning when whacked with a high-energy neutron.
edit2: and note that the nucleus must be able to capture a slow neutron and then fission due to capturing it, not due to being whammed by its kinetic energy, contrary to what you might have been imagining, because neutrons lose kinetic energy rather quickly, before having a sufficient chance of causing a fission. It must be very unstable, and yet it must be very stable.
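The quoted factors line up with rounded textbook half-lives (U-235 about 7.0e8 years, U-233 about 1.6e5 years, Pu-239 about 2.4e4 years; these are assumed round numbers, worth double-checking against a real table of isotopes):

```python
# Rounded half-lives in years for the three most stable fissile isotopes.
half_life_years = {
    "U-235": 7.0e8,   # the fluke: naturally abundant and long-lived
    "U-233": 1.6e5,
    "Pu-239": 2.4e4,
}

print(half_life_years["U-235"] / half_life_years["U-233"])    # ~4400
print(half_life_years["U-235"] / half_life_years["Pu-239"])   # ~30,000
print(half_life_years["U-233"] / half_life_years["Pu-239"])   # under 7
```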
If you view the 90% number as an upper bound, with a few bits’ worth of error bars, it doesn’t look like such a strong claim. If Szilard and Fermi both agreed that the probability of the bad scenario was 10% or more, then it may well have been dumb luck that Szilard’s estimate was higher. Most of the epistemic work would have been in promoting the hypothesis to the 10% “attention level” in the first place.
(Of course, maybe Fermi didn’t actually do that work himself, in which case it might be argued that this doesn’t really apply; but even if he was anchoring on the fact that others brought it to his attention, that was still the right move.)
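“A few bits’ worth of error bars” can be made literal in log-odds, where a probability p is worth log2(p/(1−p)) bits of evidence: 90% is about 3.2 bits, and sliding two bits either way spans roughly 69% to 97%. A small sketch:

```python
import math

def to_bits(p):
    """Log-odds of probability p, measured in bits of evidence."""
    return math.log2(p / (1.0 - p))

def from_bits(bits):
    """Probability corresponding to a given number of bits of log-odds."""
    odds = 2.0 ** bits
    return odds / (1.0 + odds)

b = to_bits(0.90)
print(b)                 # about 3.17 bits
print(from_bits(b - 2))  # about 0.69: two bits kinder to Szilard's side
print(from_bits(b + 2))  # about 0.97: two bits more confident than Fermi
```

On this scale, the disagreement between a 90% and a 10%-or-more estimate is only a few bits, which is why most of the epistemic work lies in promoting the hypothesis to attention at all.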
Radioactivity was discovered via the “fluorescence is responsible for x-rays” idea, which did not pan out...
There’s a large number of fusion-related attempts that did not pan out at all; there’s the fission of lithium, which can’t be used for a chain reaction and is only used for making tritium. There’s hafnium triggering, which might or might not pan out (and all the other isomers), and so on.
For the most part, chasing or not chasing “wouldn’t it be neat if” scenarios doesn’t have much of an effect on science, it seems—Fermi would still inevitably have discovered secondary neutrons even if he wasn’t pursuing the chain reaction (provided someone else didn’t do it before him).
They were not hell-bent on obtaining grant money for a fission bomb no matter what. The first thing they had to do was measure fission cross sections over the neutron spectra, and in the counterfactual world where U-235 does not exist but they detected fission anyway (because high energy neutrons do fission U-238), they did the founding work for accelerator-driven fission, whose fission products treat cancer around the world (the radiation sources in medicine would still be produced somehow). In that world, maybe some other sequence gets written about how Szilard was wrong and Fermi dramatically overestimated, how obviously the chance was far lower because they were talking about one isotope and not a single isotope works, and how naive it is to think that fissioning and producing neutrons is enough for a chain reaction (the bar on that is a tad higher), etc. In that alternate world, today, maybe there’s even an enormous project trying to produce—in an accelerator or something more clever—enough plutonium to kick-start a breeder reactor economy. Or maybe they got fusion power plants, because a lot of effort was put into that (plus the Manhattan Project never happened and some scientists perhaps didn’t get cancer). edit: Or a combination of the two could have happened at some point much later than 1945: a sub-unity tokamak producing neutrons via fusion, used to irradiate uranium-238 and breed enough plutonium to kick-start breeder reactors. Or maybe not, because it could have taken a long while there until someone measured the properties of plutonium. Either way, Fermi and Szilard end up looking awesome.
How about the original Pascal’s wager? It was made by a famed mathematician rather than a famed physicist, and it wasn’t a conspiracy, but it’s definitely in the same reference class.
Because they didn’t know if fission produced enough prompt neutrons, which is clear from the quoted passage, and probably also because Fermi had estimated that there were on the order of 10 other propositions about the results of fission which he, if presented with them by an equally enthusiastic proponent, would find comparably plausible. I’m thinking that in the alternate realities where fission does something other than producing a sufficient number of neutrons (about 3 on average), you’d assign a likewise high number to them by hindsight, with a sum greater than 1 (so stop calling it probability already).
A clever argument! Why didn’t it work on Reality?
I’m correcting a potential factual error:
What I am guessing happened (you’re welcome to research the topic): first, you learn that uranium can be fissioned by neutrons (which you make, if I recall correctly, by irradiating lithium with alpha particles). Then you may learn that fission produces neutrons, because it so happens that you don’t just see all of that in a microscope; you see particle tracks in photographic emulsion or a cloud chamber or the like, and neutrons, being neutral, are hard to detect. (edit: And this is how I read the quote, anyway, on the first reading. I just parse it as a low probability of neutrons, and a high probability of a chain reaction if there’s enough neutrons.)
So at first you do not know whether fission produces neutrons without a very precise and difficult analysis of the conservation of momentum, or a big enough experiment to actually be able to count them, or something likewise clever and subtle. Come to think of it, chronologically, you may happen to first acquire weak evidence that fission does not produce prompt neutrons, by detecting beta decay from the fission products, which implies that they still have too many neutrons for their atomic number. And perhaps by detecting recoil from the delayed neutrons (which are too few for a chain reaction, and too delayed for a bomb).
Or did it? It’s a bit like arguing how dumb it was to predict a 1⁄6 probability for the die rolling 1, when it in fact rolled 1. Given 6 sides and a lack of information to prefer one over another, the probability is 1⁄6 (edit: or less, of course). The relevant reality here is the available knowledge and the mechanism that assigns plausibilities, and the first step of “working” is probabilities (somehow related to plausibilities) summing to 1. You ought to be able to test a mind upload’s priors—just run them in parallel a very large number of times, having them opine on the probabilities of various topics, and see what the probabilities of mutually exclusive scenarios sum to, or whether the sum even converges. Ghmm.
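The sanity check being described is easy to make concrete. A minimal sketch (the scenario names and the numbers are invented for illustration, not taken from this thread):

```python
# Hindsight-assigned plausibilities for mutually exclusive outcomes of fission.
# The labels and values below are made up purely to illustrate the failure mode:
# if these were real probabilities of mutually exclusive scenarios, they would
# have to sum to at most 1.
hindsight_estimates = {
    "fission emits ~3 prompt neutrons": 0.9,
    "fission emits only delayed neutrons": 0.4,
    "fission emits no neutrons at all": 0.3,
}

total = sum(hindsight_estimates.values())
print(f"sum of assigned 'probabilities' = {total:.1f}")  # 1.6

if total > 1.0:
    # The assignments fail the most basic coherence test.
    print("These can't all be probabilities of mutually exclusive outcomes.")
```

The same check is what the mind-upload thought experiment above would automate: collect assignments over exhaustive, mutually exclusive scenarios and see whether they cohere.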
Ohh, the elephant in the room that I somehow neglected to mention. (It is hard to argue against silly ideas, I suspect for a reason similar to why it is very hard / impossible to truly reflect on how you visually tell apart cats and dogs)
There’s a lot of nuclei that can fission, but can’t sustain a chain reaction! Because they do not produce high enough energy neutrons, or they capture neutrons too often and fission too rarely, and so on. And the neutron source of the time (radium plus lithium, or radium plus beryllium, or something like that) produced a lot of high energy neutrons.
It would be quite interesting if someone far more obsessive-compulsive than me would go over the table of isotopes and see whether about 1 in 10 of the isotopes that can fission when irradiated with a radium-lithium or radium-beryllium neutron source produce enough neutrons of high enough energy. Because if it is close to 1 in 10, and I think it is (on the appropriate, i.e. logarithmic, scale), then the evidence that an isotope can fission will only get you to a 1-in-10 chance that it makes neutrons that can fission it.
Szilard was proposing the idea of fission chain reactions in general. Of course he would be less confident if asked about a specific isotope, but he’s still right that the idea is important even if he gets the isotope wrong. Anyway, the fact that he discusses uranium specifically shows that the evidence available to him pointed toward uranium, and that this sort of reference class is not using all the evidence they had at the time.
You’re making it sound like you have half of the periodic table on the table. You don’t. There’s U-238, U-235, Th-232, and that’s it. Forget plutonium; you won’t be making any significant amount of that in 1945 without a nuclear reactor. Of those, the evidence for fission would actually be coming from U-238 fissioning by fast neutrons, and U-238 can’t sustain a chain reaction because too many of the neutrons slow down before they fission anything, and slow neutrons get captured rather than cause fission.
U-235 is the only naturally abundant fissile isotope, and it has a half life of 700 million years, which is 4400 times longer than the half life of the second most stable fissile isotope (U-233) and 30,000 times longer than that of the third most stable fissile isotope (that’s the point: a factor of 4400 between the first and the second, then a factor of less than 7 between the second and the third, and so on). That’s how much of a fluke U-235 is. One can legitimately wonder if our universe is fine tuned for U-235 to be so stable.
edit: note, confusing terminology here: “fissile” means capable of supporting a chain reaction, not merely those capable of fissioning when whacked with a high energy neutron.
edit2: and note that the nucleus must be able to capture a slow neutron and then fission due to capturing it, not due to being whammed by its kinetic energy, contrary to what you might have been imagining, because neutrons lose kinetic energy rather quickly, before having a sufficient chance at causing a fission. It must be very unstable, and yet it must be very stable.
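The ratio arithmetic above is easy to verify. A quick sketch, assuming the third most stable fissile isotope meant here is Pu-239 (the half-life values below are standard approximate reference figures I’m supplying, since the comment only gives the ratios):

```python
# Approximate half-lives in years. The comment above only states the ratios,
# so these specific values are my own inputs (standard reference figures).
half_lives = {
    "U-235": 7.04e8,   # ~704 million years
    "U-233": 1.59e5,   # ~159,000 years
    "Pu-239": 2.41e4,  # ~24,100 years (assumed to be the "third most stable")
}

ratio_1_to_2 = half_lives["U-235"] / half_lives["U-233"]
ratio_2_to_3 = half_lives["U-233"] / half_lives["Pu-239"]

print(f"U-235 vs U-233:  ~{ratio_1_to_2:.0f}x")  # on the order of 4400
print(f"U-233 vs Pu-239: ~{ratio_2_to_3:.1f}x")  # less than 7
```

The huge first gap and the small second gap are what makes U-235 look like a fluke: the stability scale drops by more than three orders of magnitude right after it.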
If you view the 90% number as an upper bound, with a few bits’ worth of error bars, it doesn’t look like such a strong claim. If Szilard and Fermi both agreed that the probability of the bad scenario was 10% or more, then it may well have been dumb luck that Szilard’s estimate was higher. Most of the epistemic work would have been in promoting the hypothesis to the 10% “attention level” in the first place.
(Of course, maybe Fermi didn’t actually do that work himself, in which case it might be argued that this doesn’t really apply; but even if he was anchoring on the fact that others brought it to his attention, that was still the right move.)
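The “few bits’ worth of error bars” framing can be made concrete by converting the probability to log-odds, where a bit of evidence shifts the estimate by a factor of 2 in odds. A sketch (the choice of a ±2-bit window is my own illustration, not from the comment):

```python
import math

def prob_to_log_odds_bits(p):
    """Convert a probability to log-odds measured in bits."""
    return math.log2(p / (1 - p))

def log_odds_bits_to_prob(b):
    """Convert log-odds in bits back to a probability."""
    odds = 2 ** b
    return odds / (1 + odds)

p = 0.10  # Fermi's reported estimate
center = prob_to_log_odds_bits(p)  # about -3.2 bits

# A couple of bits of error bars around 10% spans roughly 3% to 31%:
for shift in (-2, -1, 0, 1, 2):
    q = log_odds_bits_to_prob(center + shift)
    print(f"{shift:+d} bits -> {q:.0%}")
```

On this scale, the difference between Szilard's and Fermi's numbers is only a bit or two, which is why most of the epistemic work lies in getting the hypothesis to the 10% attention level at all.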
I suppose if we postulate that Szilard and Rabi did better by correlated dumb luck, then we can avoid learning anything from this example, yes.