I don’t think anyone (e.g., at FHI or MIRI) is worried about human extinction via gray goo anymore.

Like, they expected nanotech to come sooner? Or something else? (What did they say, and where?)
The fate of the concept of nanotechnology has been a curious one. You had the Feynman/Heinlein idea of small machines making smaller machines until you get to atoms. There were multiple pathways towards control over individual atoms, from the usual chemical methods of bulk synthesis, to mechanical systems like atomic force microscopes.
But I think Eric Drexler’s biggest inspiration was simply molecular biology. The cell had been revealed as an extraordinary molecular structure whose parts included a database of designs (the genome) and a place of manufacture (the ribosome). What Drexler did in his books was to take that concept and imagine it being realized by something other than the biological chemistry of proteins, membranes, and water. In particular, he envisaged rigid mechanical structures, often based on diamond (i.e., a carbon lattice with a hydrogen surface), often assembled in hard vacuum by factory-like nano-mechanisms, rather than grown in a fluid medium by redundant, fault-tolerant, stochastic self-assembly (as in the living cell).
Having seen this potential, he then saw this ‘nanotechnology’ as a way to do all kinds of transhuman things: make AI that is human-equivalent but much smaller and faster (and hotter) than the human brain; grow a starship from a molecularly precise 3D printer in an afternoon; resurrect the cryonically suspended dead. And also as a way to make replicating artificial life that could render the earth uninhabitable.
For many years, there was an influential futurist subculture around Drexler’s thought and his institute, the Foresight Institute. And nanotechnology made its way into SF pop culture, especially the idea of the ‘nanobot’. Nanobots are still there as an SF trope—and are sometimes cited as an inspiration in real research that involves some kind of controlled nanomechanical process—but I think it’s unquestionable that the hype surrounding that nano-futurist community has greatly diminished, as the years kept passing without the occurrence of the “assembler breakthrough” (the ability to make nonbiological nano-manufacturing agents).
There is a definite sense in which I think Eliezer eventually took up a place in culture analogous to that once held by Eric Drexler. Drexler had articulated a techno-eschatology in which the entire future revolved around the rise of nanotechnology (and his core idea for how humanity could survive was to spread into space; he had other ideas too, but I’d say that’s the essence of his big-picture strategy), and it was underpinned not just by SF musings but also by nanomachine designs, complete with engineering calculations. With Eliezer, the crucial technology is artificial intelligence, the core idea is alignment versus extinction via (e.g.) paperclip maximizer, and the technical plausibility arguments come from computer science rather than physics.
Those who are suspicious of utopian and dystopian thought in general, including its technologically motivated forms, are happy to say that Drexler’s extreme nano-futurology faded because something about it was never possible, and that the same fate awaits Eliezer’s extreme AI-futurology. As for me, I find the arguments in both cases quite logical. And that raises the question, even as we live through a rise in AI capabilities that is keeping Eliezer’s concerns very topical: why did Drexler’s nano-futurism fade? Not just in the sense that the assembler breakthrough never became a recurring topic of public concern the way climate change did, but also in the sense that you don’t see effective altruists worrying about the assembler breakthrough, and that is entirely because they are living in the 2020s. If effective altruism had existed in the 1990s, there is little doubt that gray goo and nanowar would have been high on the list of existential risks.
Understanding what happened to Drexler’s nano-futurism requires understanding what kind of ‘nano’ or chemical progress has occurred since those days, and whether the failure of certain things to eventuate is because they are impossible, because not enough of the right people were interested, because the relevant research was starved of funds and suppressed (but then, by whom, how, and why), or because it’s hard and we haven’t crossed the right threshold yet, the way artificial neural networks couldn’t really take off until the hardware for deep learning existed.
It seems clear that ‘nanotechnology’ in the form of everything biological is still developing powerfully and in an uninhibited way. The Covid pandemic has actually given us a glimpse of what a war against a nano-replicator is like, in the era of a global information society with molecular tools. And gene editing, synthetic biology, organoids, all kinds of macabre cyborgian experiments on lab animals, etc., develop unabated in our biotech society.
As for the non-biological side… it was sometimes joked that ‘nanotechnology’ is just a synonym for ‘chemistry’. Obviously, the world of chemical experiment and technique, quantum manipulations of atoms, design of new materials—all that continues to progress too. So it seems that what really hasn’t happened is that specific vision of assemblers, nanocomputers, and nanorobots made from diamond-like substances.
Again, one may say: it’s possible, it just hasn’t happened yet for some reason. The world of low-dimensional carbon structures—buckyballs, buckytubes, graphene—seems to me the closest we’ve come so far. All that research is still developing, and perhaps it will eventually bootstrap its way to the Drexlerian level of nanotechnology, once the right critical thresholds are passed… Or, one might say that Eric’s vision (assemblers, nanocomputers, nanorobots) will come to pass without even requiring “diamondoid” nanotechnology—instead it will happen via synthetic biology and/or other chemical pathways.
My own opinion is that diamondoid nanotechnology seems like it should be possible, but I wonder about its biocompatibility—a crucial theme in the nanomedical research of Robert Freitas, who was the champion of the medical applications envisaged by Drexler. I am just skeptical about the capacity of such systems to be useful in a biochemical environment. Stanislaw Lem, writing of astronomically sized intelligences, once said that “only a star can survive among stars”, meaning that such intelligences should bear superficial similarities to natural celestial bodies, because they are shaped by a common physical regime; and perhaps biomedically useful nanomachines must necessarily resemble and operate like the protein complexes of natural biology, because they have to work in that same regime of soluble biopolymers.
Specifically with respect to ‘gray goo’, i.e. nonbiological replicators that eat the ecosphere (keywords include ‘aerovore’ and ‘ecophagy’), it seems like it ought to be physically possible, and the only reason we don’t need to worry so much about diamondoid aerovores smothering the earth, is that for some reason, the diamondoid kind of nanotechnology has received very little research funding.
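The scale of the ecophagy worry can be conveyed with a toy doubling calculation. All numbers below are illustrative assumptions of mine (not figures from Freitas or from this discussion): a hypothetical micron-scale replicator of ~1e-15 kg, a ~1000-second replication cycle, and ~5.5e14 kg as the order of magnitude of Earth’s total biomass.

```python
import math

# Toy back-of-envelope: how many doublings would an unchecked replicator
# need before its descendants matched the planet's biomass, and how long
# would that take at a fixed doubling time? All constants are assumptions.
BIOMASS_KG = 5.5e14        # assumed total biomass to convert
REPLICATOR_KG = 1e-15      # assumed mass of one replicator
DOUBLING_TIME_S = 1000.0   # assumed time per replication cycle

doublings = math.log2(BIOMASS_KG / REPLICATOR_KG)
total_days = doublings * DOUBLING_TIME_S / 86400

print(f"doublings needed: {doublings:.1f}")   # → 98.8
print(f"time at that rate: {total_days:.1f} days")  # → 1.1
```

The point of the sketch is only that exponential replication makes the endgame fast even from a single microscopic seed—about a hundred doublings suffice—which is why the scenario’s plausibility hinges entirely on whether such replicators can exist, not on how long they would need.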
Fascinating history, Mitchell! :) I share your confusion about why more EAs aren’t interested in Drexlerian nanotech, but are interested in AGI.
I would indeed guess that this is related to the deep learning revolution making AI-in-general feel more plausible/near/real, while we aren’t experiencing an analogous revolution that feels similarly relevant to nanotech. That is, I don’t think it’s mostly based on EAs having worked out inside-view models of how far off AGI vs. nanotech is.
I’d guess similar factors are responsible for EAs being less interested in whole-brain emulation? (Though in that case there are complicating factors like ‘ems have various conceptual and technological connections to AI’.)
Alternatively, it could be simple founder effects—various EA leaders do have various models saying ‘AGI is likely to come before nanotech or ems’, and then this shapes what the larger community tends to be interested in.
Specifically with respect to ‘gray goo’, i.e. nonbiological replicators that eat the ecosphere (keywords include ‘aerovore’ and ‘ecophagy’), it seems like it ought to be physically possible, and the only reason we don’t need to worry so much about diamondoid aerovores smothering the earth, is that for some reason, the diamondoid kind of nanotechnology has received very little research funding.
From Drexler’s conversation with Open Phil:

Dr. Drexler suggests that the nature of the technologies (essentially small-scale chemistry and mechanical devices) creates no risk from large scale unintended physical consequences of APM. In particular the popular “grey goo” scenario involving self-replicating, organism-like nanostructures has nothing to do with factory-style machinery used to implement APM systems. Dangerous products could be made with APM, but would have to be manufactured intentionally.
No one has a reason to build grey goo (outside of rare omnicidal crazy people), so it’s not worth worrying about, unless someday random crazy people can create arbitrary nanosystems in their backyard.
AGI is different because it introduces (very powerful) optimization in bad directions, without requiring any pre-existing ill intent to get the ball rolling.
And that raises the question, even as we live through a rise in AI capabilities that is keeping Eliezer’s concerns very topical, why did Drexler’s nano-futurism fade...
One view I’ve seen is that perverse incentives did it. Widespread interest in nanotechnology led to governmental funding of the relevant research, which caused a competition within academic circles over that funding, and discrediting certain avenues of research was an easier way to win the competition than actually making progress. To quote:
Hall blames public funding for science. Not just for nanotech, but for actually hurting progress in general. (I’ve never heard anyone before say government-funded science was bad for science!) “[The] great innovations that made the major quality-of-life improvements came largely before 1960: refrigerators, freezers, vacuum cleaners, gas and electric stoves, and washing machines; indoor plumbing, detergent, and deodorants; electric lights; cars, trucks, and buses; tractors and combines; fertilizer; air travel, containerized freight, the vacuum tube and the transistor; the telegraph, telephone, phonograph, movies, radio, and television—and they were all developed privately.” “A survey and analysis performed by the OECD in 2005 found, to their surprise, that while private R&D had a positive 0.26 correlation with economic growth, government funded R&D had a negative 0.37 correlation!” “Centralized funding of an intellectual elite makes it easier for cadres, cliques, and the politically skilled to gain control of a field, and they by their nature are resistant to new, outside, non-Ptolemaic ideas.” This is what happened to nanotech; there was a huge amount of buzz, culminating in $500 million of funding under Clinton in 2000. This huge prize kicked off an academic civil war, and the fledgling field of nanotech lost hard to the more established field of material science. Material science rebranded as “nanotech”, trashed the reputation of actual nanotech (to make sure they won the competition for the grant money), and took all the funding for themselves. Nanotech never recovered.

Source: this review of Where’s My Flying Car?
One wonders if similar institutional sabotage of AI research is possible, but we’re probably past the point where that might’ve worked (if that even was what did nanotech in).
I guess I misused the term gray goo. I apologize for this and for my bad English. Is it possible to replace it with ‘using nanotechnologies to attain a decisive strategic advantage’? I mean the discussion of the prospects for nanotechnologies on SL4 20+ years ago. This one especially:
My current estimate, as of right now, is that humanity has no more than a 30% chance of making it, probably less. The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015.
I understand that since then EY’s views have changed in many ways. But I am interested in experts’ views on the possibility of using nanotechnology for the scenarios he implies now. This is the little that I found.

Makes sense, thanks for the reference! :)