I can’t point to anything concrete from Drexler, beyond him being much more cautious than Eliezer about predicting the speed of engineering projects.
Speaking more for myself than for Drexler, it seems unlikely that AI would speed up nanotech development more than 10x. Engineering new arrangements of matter normally has many steps that don’t get sped up by more intelligence.
The initial nanotech systems we could realistically build with current technology are likely dependent on unusually pure feedstocks, and still likely to break down frequently. So I expect multiple generations of design before nanotech becomes general-purpose enough to matter.
I expect that developing nanotech via human research would require something like $1 billion in thoughtfully spent resources. Significant fractions of that would involve experiments that would be done serially. Sometimes that’s because noise makes interactions hard to predict. Sometimes it’s due to an experiment needing a product from a prior experiment.
Observing whether an experiment worked is slow, because the tools for nanoscale imaging are extremely sensitive to vibration. Headaches like this seem likely to add up.
Chip lithography (practical top-down nanotech) is already approaching the practical physical limits for non-exotic computers (and practical exotic computers seem harder and farther off than cold fusion).
Biology is already at the key physical limits (thermodynamic efficiency) for nanoscale robotics. It doesn’t matter what materials you use to construct nanobots, they can’t have large advantages over bio cells, because bio cells are already near optimal in terms of the primary constraints (which are thermodynamic efficiency for copying and spatially arranging bits).
I basically agree with this take, assuming relatively conventional computers and no gigantic computers such as planet-scale computers.
And yeah, I think Eliezer’s biggest issue with ideas like nanotechnology, and with his general approach of assuming most limitations away via future technology, isn’t that those futures can’t happen. It’s that he ignores that getting to that abstracted future state takes a lot more time than he thinks, that the time matters more than he thinks (especially in AI safety), and that it generally requires more contentious assumptions than he thinks.