I believe this because of how the world looks “brittle” (e.g., nanotech exists) and because lots of technological progress seems cognition-constrained (such as, again, nanotech). This is a big part of why I think heavy-precedent-style justifications are doomed.
Apart from nanotech, what are the main examples or arguments you would cite in favor of these claims?
Separately, how close is your conception of nanotech to “atomically precise manufacturing”, which seems like Drexler’s preferred framing right now?
Not Nate or a military historian, but to me it seems pretty likely that an actor ~100 human-years more technologically advanced than everyone else could gain a decisive strategic advantage over the world.

- In military history it seems pretty common for some tech advance to give one side a big advantage. This seems to be true today as well, with command-and-control and various other capabilities.
- I would guess pure fusion weapons are technologically possible, which would mean an AI sophisticated enough to design one could get nukes without uranium.
- Even now, at the cutting edge, the most advanced actors hold large multiples over everyone else on important metrics, due to either a few years' lead or better research practices that are still within the human range:
  - SMIC is mass-producing the 14nm node while Samsung is at 3nm, which is something like 5x better FLOPS/watt.
  - Algorithmic improvements driven by the cognitive labor of ML engineers have produced multiple orders of magnitude of improvement in value/FLOPS.
  - SpaceX gets ~10x better cost per ton to orbit than the next-cheapest launch provider, and that's before Starship; their internal costs are lower still.
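To put rough numbers on how a few years' lead compounds into large multiples, here is a back-of-envelope sketch. The doubling times are illustrative assumptions I'm plugging in (hardware cost-efficiency doubling every ~2 years, algorithmic efficiency every ~1 year), not measured figures from the examples above:

```python
# Back-of-envelope: how a lead of a few years compounds into a large
# multiple in effective compute. The doubling times below are
# illustrative assumptions, not measured figures.

HW_DOUBLING_YEARS = 2.0    # assumed doubling time for hardware FLOPS/$
ALGO_DOUBLING_YEARS = 1.0  # assumed doubling time for algorithmic efficiency

def effective_compute_multiple(lead_years: float) -> float:
    """Multiple in effective compute from being `lead_years` ahead."""
    hw_gain = 2 ** (lead_years / HW_DOUBLING_YEARS)
    algo_gain = 2 ** (lead_years / ALGO_DOUBLING_YEARS)
    return hw_gain * algo_gain

for lead in (2, 4, 6):
    print(f"{lead}-year lead -> ~{effective_compute_multiple(lead):.0f}x effective compute")
```

Under these assumed rates, a 2-year lead is already ~8x effective compute, and a 6-year lead is ~500x, which is the same ballpark as the multiples in the examples above.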
This seems sufficient for “what failure looks like” scenarios, with faster disempowerment through hard takeoff likely to depend on other pathways like nanotech, social engineering, etc. As for the whole argument against “heavy precedent”, I’m not convinced either way and haven’t thought about it a ton.
One way in which the world seems brittle / having free energy AI could use to gain advantage:
We haven't figured out good communication practices for the digital age. We don't have good collective epistemics, and we don't seem to be on track to solve this in the next 20 years. As a result, I expect that with enough compute and understanding of network science, and perhaps a couple more things, you could sabotage the whole civilization. ("Enough" is meant to stand for "a lot, but within reach of an early AGI." Heck, if Google somehow spent the next 5 years just on that, I would give them fair odds.)