Thanks so much for this post, I’ve been wishing for something like this for a long time. I kept hearing people grumbling about how EY & Drexler were way too bullish about nanotech, but no one had any actual arguments. Now we have arguments & a comment section. :)
I object to the implication that Eliezer and Drexler have similar positions. Eliezer seems to seriously underestimate how hard nanotech is. Drexler has been pretty cautious about predicting how much research it would require.
Huh, interesting. I am skeptical. Drexler seems to have thought that ordinary human scientists could get to nanotech in his lifetime, if they made a great effort. Unless he’s changed his mind about that, that means he agrees with Yudkowsky about nanotech, I think. (As I interpret him, Yudkowsky takes that claim and then adds the additional hypothesis that, in general, superintelligences will be able to do research several OOMs faster than human science, and thus e.g. “thirty years” becomes “a few days.” If Drexler disagrees with this, fine, but it’s not a disagreement about nanotech; it’s a disagreement about superintelligence.)
Can you say more about what you mean?
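For concreteness, a quick back-of-the-envelope on what “thirty years” becoming “a few days” implies; reading “a few” as 3 days is my own assumption, not a number from the thread:

```python
import math

# "Thirty years" of human-speed research vs. "a few days" for a
# superintelligence; taking "a few" = 3 is an assumption on my part.
human_days = 30 * 365
ai_days = 3

speedup = human_days / ai_days
ooms = math.log10(speedup)
print(f"implied speedup: {speedup:.0f}x, i.e. ~{ooms:.1f} OOMs")
# → implied speedup: 3650x, i.e. ~3.6 OOMs
```

So “several OOMs” here cashes out to roughly three and a half orders of magnitude.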
I can’t point to anything concrete from Drexler, beyond him being much more cautious than Eliezer about predicting the speed of engineering projects.
Speaking more for myself than for Drexler, it seems unlikely that AI would speed up nanotech development more than 10x. Engineering new arrangements of matter normally has many steps that don’t get sped up by more intelligence.
The initial nanotech systems we could realistically build with current technology are likely dependent on unusually pure feedstocks, and still likely to break down frequently. So I expect multiple generations of design before nanotech becomes general-purpose enough to matter.
I expect that developing nanotech via human research would require something like $1 billion in thoughtfully spent resources. Significant fractions of that would involve experiments that would be done serially. Sometimes that’s because noise makes interactions hard to predict. Sometimes it’s due to an experiment needing a product from a prior experiment.
Observing whether an experiment worked is slow, because nanoscale imaging tools are extremely sensitive to vibration. Headaches like this seem likely to add up.
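The serial-experiment point above can be sketched as an Amdahl’s-law-style bound. The specific numbers (10% unavoidably serial lab work, 1000x faster design and analysis) are hypothetical illustrations of mine, not claims from the comment:

```python
def overall_speedup(serial_fraction: float, design_speedup: float) -> float:
    """Overall project speedup when only the non-serial portion
    (design, analysis) is accelerated and serial lab work is not."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / design_speedup)

# Even a 1000x speedup on thinking barely beats 10x overall
# if 10% of the project is unavoidably serial experimentation.
print(round(overall_speedup(0.10, 1000), 1))  # → 9.9
```

This is why a bound like “no more than ~10x” can be robust to almost any assumption about how fast the AI thinks: the serial fraction dominates.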
Chip lithography (the practical form of top-down nanotech) is already approaching the practical physical limits for non-exotic computers (and practical exotic computers seem harder and farther off than cold fusion).
Biology is already at the key physical limits (thermodynamic efficiency) for nanoscale robotics. It doesn’t matter what materials you use to construct nanobots: they can’t have large advantages over biological cells, because cells are already near optimal on the primary constraints, namely the thermodynamic efficiency of copying and spatially arranging bits.
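For reference, the thermodynamic floor being invoked here is the Landauer limit, the minimum energy to irreversibly set one bit at temperature T. This just computes that floor; the choice of 300 K (roughly ambient) is mine:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 300.0            # roughly ambient temperature, K (my assumption)

# Landauer limit: minimum energy to irreversibly write one bit at T.
e_bit = k_B * T * math.log(2)
print(f"{e_bit:.2e} J per bit")  # → 2.87e-21 J per bit
```

The claim in the comment is then that cells copy and arrange bits close enough to this floor that no choice of building material gives nanobots a large edge.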
I basically agree with this take, assuming relatively conventional computers and no gigantic computers like planet-scale computers.
And yeah, I think Eliezer’s biggest issue with ideas like nanotechnology, and with his general approach of assuming most limitations away via future technology, isn’t that these things can’t happen. It’s that he ignores that getting to that abstracted future state takes a lot more time than he thinks, that the time matters more than he thinks (especially in AI safety), and that getting there generally requires more contentious assumptions than he thinks.