We maybe need an introduction to all the advanced work that's been done on nanotechnology, for everyone who didn’t grow up reading “Engines of Creation” as a twelve-year-old or “Nanosystems” as a twenty-year-old.
Ah. Yeah, that does sound like something LessWrong resources have been missing, then — and not just for my personal sake. Anecdotally, I’ve seen several why-I’m-an-AI-skeptic posts circulating on social media in which “EY makes crazy leaps of faith about nanotech” was a key reason for rejecting the overall AI-risk argument.
(As it stands, my objection to your mini-summary would be that sure, “blind” grey goo does trivially seem possible, but programmable/‘smart’ goo that seeks out e.g. computer CPUs in particular could be a whole other challenge, and one that’s less obviously solvable just by pointing at bacteria. But maybe that “common-sense” distinction dissolves with a better understanding of the actual theory.)