I agree that this is plausibly a real and important difference, but I do not think it is obvious.
The most recent augmentative technological change was the industrial revolution. It reshaped virtually every activity, and it allowed the majority of the population, for the first time since the agricultural revolution, to work outside agriculture.
The industrial revolution centered on energy. Having much cheaper, much more abundant energy allowed humans to use that energy for all sorts of things.
If fusion ends up being similar in cost to existing electricity production, it will be a substitutional technology. This is what we are working toward now (well, that and making it work at all). People who work in fusion focus on this because it is the reasonable near/medium-term projection. If fusion ends up being substantially cheaper, it will be an augmentative technology. It is not at all clear that this will happen, because we can’t know how the costs will change between the first and thousandth fusion power plant.
Notably, we don’t know if foom is going to be a thing either.
The narrative around the technology is at least as important as what has happened in the technology itself. The fusion community could frequently talk about how incredible the industrial revolution was, and how it powered Britain to global dominance for two centuries. A new source of energy might do the same thing! But this is more hype than we feel we ought to offer, and the community’s goal is not to create a dominant superpower.
Even if foom is going to happen, things would look very different if the leaders credibly committed to helping others foom if they are first. I don’t know if this would be better or worse from an existential risk perspective, but it would change the nature of the race a lot.
we can’t know how the costs will change between the first and thousandth fusion power plant.
Fusion plants are manufactured. By default, our assumption should be that plant costs follow typical experience-curve behavior; most technologies involving production of physical goods do. Whatever the cost ratio x per doubling of cumulative production turns out to be for fusion, the 1000th plant will likely cost close to x^10 times the first, since 1000 plants is about ten doublings (2^10 ≈ 1000). Obviously the details depend on other factors, but this should be the default starting assumption. Yes, the eventual impact assumption should be significant societal and technological transformation driven by cheaper and more abundant electricity. But the scale for that transformation is measured in decades, and there are humans designing and permitting and building and operating each and every plant, on human timescales. There’s no winner-take-all dynamic even if your leading competitor builds their first commercial plant five years before you do.
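As a minimal sketch of that arithmetic (the cost and the 20% learning rate below are purely illustrative assumptions, not claims about fusion):

```python
# Experience-curve sketch: cost of the nth plant given a constant
# cost ratio per doubling of cumulative production.
# All numbers are illustrative assumptions, not fusion data.
import math

first_plant_cost = 10e9  # assumed cost of plant #1, in dollars
learning_rate = 0.20     # assumed 20% cost drop per doubling
x = 1 - learning_rate    # cost ratio per doubling (here 0.8)

def plant_cost(n: int) -> float:
    """Cost of the nth plant: first cost times x^(number of doublings)."""
    return first_plant_cost * x ** math.log2(n)

# The 1000th plant sits ~10 doublings out (2**10 = 1024), so it costs
# roughly first_plant_cost * x**10.
ratio = plant_cost(1000) / first_plant_cost
print(f"Plant 1000 costs {ratio:.1%} of plant 1")  # ~10.8% at a 20% learning rate
```

A ninefold cost reduction is a big deal, but it arrives plant by plant over decades, which is the point about timescales above.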
Also: we have other credible paths that could greatly increase access to comparably low-cost, dispatchable clean power on a similar development timescale if we don’t get fusion.
we don’t know if foom is going to be a thing
Also true, which means the default assumption without it is that the scaling behavior looks like that of other successful software innovations. In software, development costs are high and then unit costs in deployment quickly fall to near zero. As long as AI benefits from collecting user data to improve training (which should still be true in many non-foom scenarios), we might expect network-effect scaling, where the first to really capture a market niche becomes almost uncatchable, like Meta and Google and Amazon. And where downstream app layers are built on top of the software, switching costs become very high and you get a substantial amount of lock-in, like with Apple and Microsoft.
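To make the contrast with the plant-by-plant case concrete, here is a minimal sketch of that cost structure, with all figures illustrative:

```python
# Software cost-structure sketch: high fixed development cost plus
# near-zero marginal cost, so average cost per user collapses with scale.
# All figures are illustrative assumptions, not actual lab economics.

dev_cost = 1e9        # assumed one-time development cost, in dollars
marginal_cost = 0.01  # assumed cost to serve one more user

def average_cost_per_user(users: int) -> float:
    """Fixed cost amortized over the user base, plus marginal cost."""
    return dev_cost / users + marginal_cost

for users in (1_000, 1_000_000, 1_000_000_000):
    print(f"{users:>13,} users -> ${average_cost_per_user(users):,.2f} per user")
# Unlike power plants, each of which carries its full manufacturing cost,
# the per-user cost here approaches the (near-zero) marginal cost.
```

That collapse in unit cost is what makes first-mover capture of a niche so sticky in software, in a way that has no analogue in plant construction.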
Even if foom is going to happen, things would look very different if the leaders credibly committed to helping others foom if they are first. I don’t know if this would be better or worse from an existential risk perspective, but it would change the nature of the race a lot.
Agreed. But if any of the leading labs could credibly state what kinds of things they would or wouldn’t be able to do in a foom scenario, let alone credibly precommit to what they would actually do, I would feel a whole lot better and safer about the possibility. Instead, the leaders can’t even credibly precommit to their own stated policies in the absence of foom, and they don’t have anywhere near a credible plan for managing foom if it happens.
Jeffrey, I appreciate your points about fusion’s potential, and the uncertainty around “foom.” However, I think framing this in terms of bottlenecks clarifies the core difference. The Industrial Revolution was transformative because it overcame the energy bottleneck. Today, while clean energy is vital, many transformative advancements are primarily bottlenecked by intelligence, not energy. Fusion addresses an important, existing constraint, but it’s a step removed from the frontier of capability. AI, particularly AGI, directly targets that intelligence bottleneck, potentially unlocking progress across virtually every domain limited by human cognitive capacity. This difference in which bottleneck is addressed makes the potential transformative impact, and thus the strategic landscape, fundamentally distinct. Even drastic cost reductions in energy don’t address the core limiting factor for progress in areas fundamentally constrained by our cognitive and analytical abilities.
This seems false given that AI training is, or soon will be, bottlenecked on energy.