The less useful device would (probably) not have been (much) lower in yield; it would have been much larger and heavier. For example, part of what led to the implosion device was the calculation of how long a gun-type plutonium weapon would need to be, which showed it would not fit on an aircraft. I agree that the scarcity of the materials is likely sufficient to limit the kind of iterated “let’s make sure we understand how this works in principle before we try to make something useful” process that normally goes into making new things (and that was part of “these constraints” you quote, though maybe I didn’t write it very clearly).
Edited to add:
Also, my phrasing “scarcity of materials” may be downplaying the extent to which scaling up uranium and plutonium production was part of the technological progress necessary for making a nuclear weapon. But I sometimes see people attribute the impressive and scary suddenness of deployable nuclear weapons entirely to the physics of energy release from a supercritical mass, and I think this is a mistake.
I disagree. I think it is a mistake to shoehorn “patterns” onto the history of technological progress by deliberately picking the time window and the metric, and ignoring timescale, in order to fit a narrative.
I don’t know what motivates people to try to dissolve historical discontinuities such as the advent of the nuclear bomb, but they did manage to find a metric along which early nuclear bombs were comparable to conventional bombs, namely explosive yield per dollar. But the real importance of the atom bomb is that it’s possible at all; that physics allows it; that it is about a million times more energy-dense than chemical explosives—not a hundred, not a trillion; a million. That is what determined the post-WWII strategic landscape and the predicament humanity is currently in, and it is set by the laws of nature, not by the dynamics of human societies.
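For a rough sense of where that factor of a million comes from, here is a minimal back-of-envelope sketch (not part of the original comment; the exact ratio depends on what you compare):

```python
# Back-of-envelope: energy density of U-235 fission vs. a chemical explosive (TNT).
# Rough textbook figures. Complete fission is an idealization; real weapons fission
# only a few percent of their core, which brings the practical ratio down toward ~10^6.

AVOGADRO = 6.022e23      # atoms per mole
MEV_TO_J = 1.602e-13     # joules per MeV

energy_per_fission_mev = 200.0   # ~200 MeV released per U-235 fission
u235_molar_mass_g = 235.0        # grams per mole

fission_j_per_kg = energy_per_fission_mev * MEV_TO_J * AVOGADRO * (1000.0 / u235_molar_mass_g)
tnt_j_per_kg = 4.2e6             # ~4.2 MJ per kg of TNT

print(f"U-235, complete fission: {fission_j_per_kg:.1e} J/kg")   # ~8e13 J/kg
print(f"TNT:                     {tnt_j_per_kg:.1e} J/kg")
print(f"ratio: ~{fission_j_per_kg / tnt_j_per_kg:.0e}")          # ~2e7 ideal
```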
You can’t get that information out of drawing lines through the rate of improvement of explosive yields or whatever. You wouldn’t even have thought of drawing that particular line. This whole exercise is mistaking hindsight for wisdom. The only lesson to learn from history is to not learn lessons from it, especially when something as freaky and unprecedented as AGI is concerned.
This seems like a pretty wild claim to me, even as someone who agrees that AGI is freaky and unprecedented, possibly to the point that we should expect it to depart drastically from past experience.
My issue here is with “past experience”. We don’t have past experience of developing AGI. If this were about secular cycles in agricultural societies where boundary conditions remain the same over millennia, I’d be much more sympathetic. But lack of past experience is inherent to new technologies. Inferring future technological progress from the past necessitates shaky analogies. You can see any pattern you want and deduce any conclusion you want from history by cherry-picking the technology, the time window and the metric. You say “the Wright Brothers proved experts are Luddites”; I say “Where is the flying car I’ve been promised?”. There is no way to not cherry-pick. Zoom in far enough and any curve looks smooth, including a hard AI takeoff.
My point is: don’t look at the Wright Brothers, the Manhattan Project or Moore’s Law; look at streamlines, atomic mass spectra and the Landauer limit to infer where we’re headed. Even if the picture is incomplete, it’s still more informative than vague analogies with the past.
What does “streamlines” refer to in this context? And what is the relevance of atomic mass spectra?
Looking at atomic mass spectra of uranium and its fission products (and hence the difference in their energy potential) in the early 20th century would have helped you predict just how big a deal nuclear weapons would be, in a way that looking at the rate of improvement of conventional explosives would not have.
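To make that concrete, here is a minimal sketch (not from the original thread) of how the energy per fission falls straight out of measured atomic masses via E = Δm·c², for one common fission channel:

```python
# Energy released in one plausible U-235 fission channel, computed from measured
# atomic masses (mass defect times c^2):
#     n + U-235 -> Ba-141 + Kr-92 + 3 n
# Masses in atomic mass units (u); 1 u of mass defect corresponds to ~931.5 MeV.

U_TO_MEV = 931.5

masses_u = {
    "U-235": 235.0439,
    "n": 1.0087,
    "Ba-141": 140.9144,
    "Kr-92": 91.9262,
}

mass_in = masses_u["U-235"] + masses_u["n"]
mass_out = masses_u["Ba-141"] + masses_u["Kr-92"] + 3 * masses_u["n"]

delta_m = mass_in - mass_out        # mass defect, ~0.19 u
energy_mev = delta_m * U_TO_MEV     # ~170 MeV per fission, vs. on the order of eV
                                    # per molecule for a chemical explosive

print(f"mass defect: {delta_m:.4f} u  ->  ~{energy_mev:.0f} MeV per fission")
```

The particular channel chosen here is just one of many, but the answer lands in the same 170–200 MeV range either way.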
Little Boy was a gun-type device with hardly any moving parts; it was the “larger and heavier” and inefficient and impractical prototype and it still absolutely blew every conventional bomb out of the water. Also, this is reference class tennis. If the rules allow for changing the metric in the middle of the debate, I shoot back with “the first telegraph cable improved transatlantic communication latency more than ten-million-fold the instant it was turned on; how’s that for a discontinuity”.
To be clear, I’m not saying “there’s this iron law about technology and you might think nuclear weapons disprove it, but they don’t because <reasons>” (I’m not claiming there are any laws or hard rules about anything at all). What I’m saying is that there’s a thing that usually happens, but it didn’t happen with nuclear weapons, and I think we can see why. Nuclear weapons absolutely do live in the relevant reference class, and I think the way their development happened should make us more worried about AGI.
It was, and this is a fair point. But Little Boy used something like a billion dollars’ worth of HEU, which provided a very strong incentive not to approach the design process in the usual iterative way.
For contrast, the laser’s basic physics advantage over other light sources (in coherence length and intensity) is at least as big as the nuclear weapons advantage over conventional explosives, and the first useful laser was approximately as simple as Little Boy, but the first laser was still not very useful. My claim is that this is because the cost of iterating was very low and there was no need to make it useful on the first try.
We agree on the object level. On the meta level, though, what’s so important about the very first laser? There is a lot of ambiguity in what counts as the starting point. For instance, when was the first steam engine invented? You could cite the Newcomen engine, or you could refer to various steam-powered contraptions from antiquity. The answer will differ by millennia, and with it all the other parameters characterizing the development of this particular technology; I don’t see what I’m supposed to learn from that.
Actually, I think the steam engine is the best example of Richard’s thesis, and I wish he had talked about it more. Before Newcomen, there were Denis Papin and Thomas Savery, who invented steam devices which were nearly useless and arguably not engines. Newcomen’s engine was the first commercially successful true engine, and even then it was arguably inferior to wind, water, or muscle power, only being practical in the very narrow case of pumping water out of coal mines. It wasn’t until the year 1800 (decades after the first engines) that they became useful enough for locomotion.
There was even an experience curve, noticed by Henry Adams, where each successive engine did more work and used less coal. Henry Adams’s curve was similar to Moore’s Law in many respects.
If the thesis is “There exists for every technological innovation in history some metric along which its performance is a smooth continuation of previous trends within some time window”, then yes, like I said, I agree at the object level, and my objection is at the meta level, namely that such an observation is worthless as there is basically no way to violate it. Take a disjunction over enough terms and the statement is bound to become true, but then it explains everything and therefore nothing.
Taking AGI as an example: Does slow takeoff fit the bill? Check. The scaling hypothesis implies AGI will become gradually more competent with more compute. Does hard takeoff fit the bill? Check. Recursive self-improvement implies there is a continuous chain of subagents bootstrapping from a seed AGI to superintelligence (even though it looks like Judgment Day from the outside).
If humanity survives a hard AI takeoff, I bet some future econblogger is going to draw a curve and say “Look! Each of these subagents is only a modest improvement over the last, there’s no discontinuity! Like every other technology, AI follows the same development pattern!”