Fascinating history, Mitchell! :) I share your confusion about why more EAs aren’t interested in Drexlerian nanotech, but are interested in AGI.
I would indeed guess that this is related to the deep learning revolution making AI-in-general feel more plausible/near/real, while we aren’t experiencing an analogous revolution that feels similarly relevant to nanotech. That is, I don’t think it’s mostly based on EAs having worked out inside-view models of how far off AGI vs. nanotech is.
I’d guess similar factors are responsible for EAs being less interested in whole-brain emulation? (Though in that case there are complicating factors like ‘ems have various conceptual and technological connections to AI’.)
Alternatively, it could be simple founder effects—various EA leaders do have various models saying ‘AGI is likely to come before nanotech or ems’, and then this shapes what the larger community tends to be interested in.
Specifically with respect to ‘gray goo’, i.e. nonbiological replicators that eat the ecosphere (keywords include ‘aerovore’ and ‘ecophagy’): it seems like it ought to be physically possible, and the only reason we don’t need to worry so much about diamondoid aerovores smothering the earth is that, for some reason, the diamondoid kind of nanotechnology has received very little research funding.
From Drexler’s conversation with Open Phil:
Dr. Drexler suggests that the nature of the technologies (essentially small-scale chemistry and mechanical devices) creates no risk from large scale unintended physical consequences of APM [atomically precise manufacturing]. In particular the popular “grey goo” scenario involving self-replicating, organism-like nanostructures has nothing to do with factory-style machinery used to implement APM systems. Dangerous products could be made with APM, but would have to be manufactured intentionally.
No one has a reason to build grey goo (outside of rare omnicidal crazy people), so it’s not worth worrying about, unless someday random crazy people can create arbitrary nanosystems in their backyard.
AGI is different because it introduces (very powerful) optimization in bad directions, without requiring any pre-existing ill intent to get the ball rolling.