Isn’t life an example of self-assembling molecular nanotechnology? If life exists, then our physics allows for programmable systems which use similar processes.
We already have Turing-complete molecular computers… but they’re currently too slow and expensive for practical use. I predict that self-assembling nanotech, programmed with a library of robust modular components, will happen long before strong AI.
Life is a wonderful example of self-assembling molecular nanotechnology, and as such gives you a template of the sorts of things that are actually possible (as opposed to Drexlerian ideas). That is to say:

Everything is built from a few dozen stereotyped monomers assembled into polymers, rather than from atoms arranged arbitrarily. There are errors at every step of the way, from mutations to misincorporation of amino acids in proteins, so everything must be robust to small problems (seriously, something like 10% of the large proteins in your body have an amino acid out of place, as opposed to being built with atomic precision, and they can be altered and damaged over time). Life uses a lot of energy, via a metabolism, to maintain itself in the face of the world and its own chemical instability; over a relatively short time this is often more energy than is embodied in the chemical bonds of the structure itself, if it’s doing anything interesting, and for that matter building the structure requires much more energy than is actually embodied in it. You have many discrete medium-sized molecules moving around and interacting in aqueous solution, rather than much in the way of solid-state action. And on scales larger than viruses or protein crystals, everything is built more or less according to a recipe of interacting forces and emergent behavior, rather than to anything like a digital blueprint.
So yeah, remarkable things are possible, most likely even including things that naturally-evolved life does not do now. But there are limits and it probably does not resemble the sorts of things described in “Nanosystems” and its ilk at all.
a template of the sorts of things that are actually possible
Was this true at the macroscale too? The jet flying over my head says “no”. Artificial designs can have different goals than living systems, and are not constrained by the need to evolve via a nearly-continuous path of incremental fitness improvements from abiogenesis-capable ancestor molecules, and this turned out to make a huge difference in what was possible.
I’m also skeptical about the extent of what may be possible, but your examples don’t really add to that skepticism. Two examples (systems that evolved from random mutations don’t have ECC to prevent random mutations; systems that evolved from aquatic origins do most of their work in aqueous solution) are actually reasons to expect a wider range of possibilities in designed vs. evolved systems; one (dynamic systems may not be statically stable) is true at the macroscale too; and one (genetic code is vastly less transparent than computer code) is a reason to expect MNT to involve very difficult problems, but not necessarily a reason to expect very underwhelming solutions.
Biology didn’t evolve to take advantage of ridiculously concentrated energy sources like fossil petroleum, or of major industrial infrastructure, two things that make jets possible. This is similar to some of the reasons I think synthetic molecular technology will probably be capable of things that biology isn’t: taking advantage of, say, electricity as an energy source, or one-off batch synthesis that brings together non-self-replicating systems from parts made separately.
In fact the analogy of a bird to a jet might be apt for describing the differences between what a synthetic system could do and what biological systems do now, due to their different energy sources and non-self-replicating components (though it might be a lot harder here to brute-force such a change in quantitative performance by ridiculous application of huge amounts of energy at low efficiency).
I still suspect, however, that when you look at the sorts of reactions that can be done, and the patterns that can be made in quantities that matter as more than curiosities or rare, expensive, fragile demonstrations, you will be dealing with statistical reactions rather than precise engineering, and with dynamic systems rather than static ones (at least during the building process), just because of the nature of matter at this scale.
What do you make of the picture Richard Jones paints? I’m not much more than a layman—though I happen to know my way around medicine—and I find his critique of Drexler’s vision of nanotechnology sound.
His position seems to be that Drexler-style nanotechnology is theoretically possible, but that developing it would be very difficult.
I do not think that Drexler’s alternative approach – based on mechanical devices made from rigid materials – fundamentally contradicts any physical laws, but I fear that its proponents underestimate the problems that certain features of the nanoworld will pose for it. The close tolerances that we take for granted in macroscopic engineering will be very difficult to achieve at the nano-scale because the machines will be shaken about so much by Brownian motion. Finding ways for surfaces to slide past each other without sticking together or feeling excessive friction is going to be difficult.
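Jones’s Brownian-motion worry can be put in rough numbers with the equipartition theorem: a part held in place by a spring-like restoring force of stiffness k jitters with rms amplitude sqrt(kB·T/k). A minimal Python sketch; the stiffness values are illustrative assumptions, not numbers from any proposed design:

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Equipartition: (1/2) k <x^2> = (1/2) kB T, so x_rms = sqrt(kB*T/k).
# The stiffness values below are illustrative assumptions only.
for k_spring in (10.0, 1.0, 0.1):   # N/m
    x_rms_nm = math.sqrt(KB * T / k_spring) * 1e9
    print(f"stiffness {k_spring:5.1f} N/m -> rms thermal jitter {x_rms_nm:.3f} nm")
```

Even at a fairly stiff 1 N/m the jitter comes out around 0.06 nm, a sizable fraction of a typical bond length (~0.1–0.2 nm), which is exactly the tolerance problem Jones describes.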
A hypothetical superintelligence might find it easier...
Yes, that seems to be his main argument against Drexler’s vision, though I wonder if he thinks it’s difficult to come up with a design that would be robust, or if that kind of nanotechnology would be difficult to implement because it requires certain conditions, such as a vacuum close to 0 kelvin, which might be a bit cumbersome even for a superintelligence(?) unless you hang out a lot in space.
Life is a wonderful example of self-assembling molecular nanotechnology, and as such gives you a template of the sorts of things that are actually possible (as opposed to Drexlerian ideas).
Except that Drexlerian ideas are very alien compared to life, and are also physically possible (according to Nanosystems).
That is to say, everything is built from a few dozen stereotyped monomers assembled into polymers (rather than arranging atoms arbitrarily) […] everything is built more or less according to a recipe of interacting forces and emergent behavior (rather than having something like a digital blueprint).
You are generalizing to all of physics from the narrow band of biochemistry. Biochemistry is aqueous, solvent-based, room-temperature-range, and evolved. It is not comparable to e.g. printed circuitry on a silicon chip.
So yeah, remarkable things are possible, most likely even including things that naturally-evolved life does not do now. But there are limits and it probably does not resemble the sorts of things described in “Nanosystems” and its ilk at all.
There are sure to be limits. However, the limits are probably nothing like those of life. Life is kind of useful to point to as an example of how self-replicating systems can exist, but apart from that it is a very misleading analogy. (At least, if we’re talking about hard nanotech, which is what MNT is usually used to refer to and what Drexler focuses on. Soft nanotech that mimics or borrows from biology is incredibly interesting, but different.)

He is answering someone specifically bringing up life as an example of why Drexler’s ideas are possible, and why that doesn’t actually hold.
To what extent is labeling the behavior of biological systems as “emergent” just an admission that these systems are currently mysterious to us?
I don’t think it’s clear to what extent biological systems have “emergent” behavior, vs. organization into distinct “modules” each with a specific role, and robust feedback systems.
The book chapter “On Modules and Modularity” in the book System Modeling in Cellular Biology argues that simple modular design is likely selected for, as it would increase the ability of an organism to evolve and adapt. Non-modular systems are so interconnected that small changes break too many things. Biological systems may be more modular (and therefore more understandable) than they currently seem, but we’ll need to extend our study to look more at dynamic behavior before we can identify these modules and understand their function.
The fact that biological systems are so reliable despite high error rates in the underlying processes shows that feedback systems are an effective strategy to build robust systems from somewhat unreliable components.
I think this suggests that we may not need to solve hard problems such as protein folding before we can build practical self-assembling nanotech. We “just” need a programmable library of robust modules that can be combined in arbitrary ways, but we may find that these already exist, or that they can be engineered from what we do understand (such as RNA chemistry).
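The robustness-from-unreliable-parts claim above is easy to demonstrate in miniature. Here is a minimal sketch using majority voting over redundant noisy components, a crude stand-in for the feedback and proofreading loops biology actually uses; the 10% error rate echoes the misincorporation figure mentioned earlier, and everything else is an arbitrary toy assumption:

```python
import random

def component(x, p_err=0.1):
    """A noisy component: returns the wrong bit with probability p_err."""
    return x ^ 1 if random.random() < p_err else x

def voted(x, n=5, p_err=0.1):
    """Majority vote over n independent noisy copies."""
    ones = sum(component(x, p_err) for _ in range(n))
    return 1 if ones > n / 2 else 0

random.seed(0)
trials = 100_000
single = sum(component(1) != 1 for _ in range(trials)) / trials
major5 = sum(voted(1, n=5) != 1 for _ in range(trials)) / trials
major9 = sum(voted(1, n=9) != 1 for _ in range(trials)) / trials
print(f"single component error: {single:.4f}")   # ~0.10
print(f"5-way majority error:   {major5:.5f}")   # ~0.009
print(f"9-way majority error:   {major9:.6f}")   # ~0.0009
```

A 10%-unreliable part becomes a ~0.1%-unreliable subsystem with nine-fold redundancy; real feedback loops do better still, which is the point about reliable wholes from sloppy parts.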
I don’t seem to have the same disdain for the word ‘emergent’ as much of the population here. I don’t use it as a curiosity stopper or in place of the word ‘mysterious’ - I wouldn’t be much of a biologist if a little emergent behavior stopped me cold. (Also, no argument about the many modular things in biological systems; I pull out and manipulate pathways and regulatory circuits regularly in my work, but there is a whole lot which is still very context-dependent.)

In this context I used the word ‘emergent’ to mean that rather than having some kind of map of the final structure embedded in the genetic instructions, you have instructions specifying the properties of small elements, which then produce those larger structures only in the context of their interactions with each other, and which produce a lot more structure than is actually encoded in the DNA via the rather opaque ‘decompression algorithm’ of physics and chemistry (through which small, simple changes to the elements can map to almost no change in the product, or to vast changes across multiple attributes). I’ve always found the analogy of genetics to a blueprint or algorithm tiresome, and the analogy to a food recipe much more applicable; nothing in a recipe dictates things like, say, fluffiness, other than the interactions of everything you put in, in the context of an oven. You can alter biological systems in numerous ways with some regularity, but only in some cases are there simple knobs you can turn to alter isolated attributes.
I mostly agree with your last two paragraphs, actually. Synthetic systems with properties similar to RNA or protein chemistry may eventually have a lot of power, especially if they contain chemical properties not present in any of the basic building blocks of biology. They just will not have atomic-scale precision or arbitrary control over matter; they will be limited by things analogous to nutrients and metabolisms, and will either require a hell of a lot of functionality not directly connected to their main functions just to hold themselves together, or a lot of external infrastructure to make disposable things.
I really like your recipe analogy; I think it would be very useful for teaching molecular biology.
I think our discussion mirrors the tension between traditional biology and bioengineering. As a bioengineer I’m primarily concerned with what is possible to build given the biology we already know.
While I agree that a “blueprint” isn’t a good analogy for naturally evolved living organisms, this doesn’t prevent us from engineering new molecular systems that are built from a blueprint. As I mentioned, we already have Turing-complete molecular computers, and software compilers that can turn any code into a set of molecules that will perform the computation. It’s currently too slow and expensive to be useful, but it shows that programmable molecular systems are possible.

It’s the usual analogy I see.
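For a concrete feel of what “compiling code into molecules” means, here is a toy, idealized mass-action model of an AND gate in the spirit of DNA strand-displacement logic. It is a sketch only, not the actual molecular-computer toolchain referred to above; the species, rate constants, and concentrations are invented:

```python
# Toy "AND gate" as two sequential reactions: a gate complex G releases
# output O only after consuming both inputs.
#   IN1 + G -> I   (intermediate)
#   IN2 + I -> O   (output)
# Cartoon of strand-displacement logic; all rates/concentrations invented.

def and_gate(in1, in2, g=1.0, k=1.0, dt=0.001, steps=20000):
    i = o = 0.0
    for _ in range(steps):            # forward-Euler mass-action kinetics
        r1 = k * in1 * g              # rate of IN1 + G -> I
        r2 = k * in2 * i              # rate of IN2 + I -> O
        in1 -= r1 * dt
        g   -= r1 * dt
        i   += (r1 - r2) * dt
        in2 -= r2 * dt
        o   += r2 * dt
    return o

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"IN1={a} IN2={b} -> output {and_gate(float(a), float(b)):.2f}")
# Output concentration ends up high only when both inputs are present.
```

The design choice mirrors the text’s point: the “program” is just which molecules can react with which, and a compiler’s job is to emit a reaction network with the desired truth table.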
Nanosystems discusses theoretical maximums. However, even if you assume that living cells are as good as it gets, an E. coli, which we know from extensive analysis uses around 25,000 moving parts, can double itself in 20 minutes.
So in theory, you have some kind of nano-robotic system that is able to build stuff. Probably not any old stuff—but it could produce tiny subunits that can be assembled to make other nano-robotic systems, and other similar things.
And if it ran as fast as an E. coli, it could build a copy of itself every 20 minutes.
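To see why the 20-minute figure sounds so dramatic on paper, here is the unconstrained exponential arithmetic (the cell and crust masses are round numbers; real growth is feedstock-, energy-, and heat-limited, which is the crux of the replies below):

```python
import math

DOUBLING_MIN = 20        # E. coli-like doubling time, minutes
CELL_MASS_KG = 1e-15     # ~1 picogram, rough E. coli wet mass

# Time for one replicator to reach a target mass, ignoring all limits.
for label, target_kg in [("1 kg", 1.0),
                         ("Earth's crust (~2.6e22 kg)", 2.6e22)]:
    doublings = math.log2(target_kg / CELL_MASS_KG)
    hours = doublings * DOUBLING_MIN / 60
    print(f"{label}: {doublings:.0f} doublings ~ {hours:.0f} hours")
```

Unchecked, ~50 doublings (under 17 hours) gets you from one cell to a kilogram, and ~124 doublings to crust-scale mass, which is exactly why the binding constraint is never the doubling time.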
That’s still pretty much a revolution, a technology that could be used to tear apart planets. It just might take a bit longer than it takes in pulp sci-fi.
That’s still pretty much a revolution, a technology that could be used to tear apart planets. It just might take a bit longer than it takes in pulp sci-fi.
That looks like it’s missing the point to me. As one of my physics professors put it, “we already have grey goo. It’s called bacteria.” If living cells are as good as it gets, and E. coli didn’t tear apart the Earth, that’s solid evidence that nanosystems won’t tear apart the Earth.

Though they can alter it catastrophically.
I’d say life is very near to as good as it gets in terms of moving around chemical energy and using it to transform materials without something like a furnace or a foundry. You’re never going to eat rock; it’s already in a pretty damn low energy state that you cannot use for energy. Lithotrophic bacteria take advantage of redox differences between materials in rocks and live REALLY slowly so that new materials can leach in. You need to apply external energy to rock in order to transform it. And as TheOtherDave has said, major alterations have happened, but according to rather non-grey-goo patterns, and I suspect that the sorts of large-scale reactions (as opposed to a side-branch that some energy takes) will be more similar to biological transformations than to other possibilities.
I do think that life is not necessarily as good as it gets in terms of production of interesting bulk materials or photosynthesis, though, because in both these cases we can take advantage of infrastructure that is not self-replicating on its own to help things along. Imagine a tank in which electrodes fed by photovoltaics (hopefully made of something better than the current heavy-metal-doped silicon, something that could easily be recycled or degraded when it inevitably photodegrades) directly drive the redox reactions that fix CO2 from the air into organic molecules, followed by the chemistry required to take that feedstock and make it into an interesting material (along with an inevitable waste product or six). Drop in the appropriate nutrient/vitamin-analogues and let it run, then purify the product… I sometimes wonder if such a system might in the long run cause an ‘ecological’ disruption by being more efficient at creating materials from simple feedstocks than regular living plants, and over very long timescales crowd them out, but then there is the issue of the non-self-replicating components, which add a drag. It’s a very interesting and potentially strange set of scenarios to be sure, but yeah, not exactly grey goo (grey sprawl?).
EDIT: Percival Zhang’s research at Virginia Tech may provide a look at some of the ideas I find particularly interesting:

Cell-free biofuel production:
http://pubs.acs.org/doi/abs/10.1021/cs200218f

Proposals for synthetic photosynthesis:
http://pubs.acs.org/doi/abs/10.1021/bk-2012-1097.ch015
http://precedings.nature.com/documents/4167/version/1

General overview:
http://www.vt.edu/spotlight/innovation/2012-02-27-fuels/zhang.html
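As a rough sanity check on the tank idea described above, one can compare end-to-end sunlight-to-product efficiency for a crop versus a photovoltaic-plus-electrochemistry route. Every number below is a coarse assumption at literature scale, not a measurement of any real system:

```python
# Sunlight -> fixed carbon, order-of-magnitude comparison (all assumed).
crop_photosynthesis  = 0.01   # season-averaged crop efficiency, ~0.5-2%
pv_module            = 0.20   # typical commercial photovoltaic module
co2_electroreduction = 0.40   # assumed electrochemical energy efficiency
downstream_chemistry = 0.70   # assumed losses converting feedstock to product

tank = pv_module * co2_electroreduction * downstream_chemistry
print(f"crop route: {crop_photosynthesis:.1%} of sunlight into biomass")
print(f"tank route: {tank:.1%} of sunlight into product")
print(f"headroom:   ~{tank / crop_photosynthesis:.0f}x")
```

Even with pessimistic conversion losses the non-self-replicating route plausibly has several-fold headroom over photosynthesis, which is the ‘grey sprawl’ worry in miniature.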
I’d be really surprised if evolution has done all it can. We simply don’t know enough to say what might turn up in the next million years or ten million years.
Bacteria, as well as all life, are stuck at local maxima, because evolution cannot find globally optimal solutions. Part of Drexler’s work is to estimate what the theoretical optimum solutions can do.
My statement “tear apart planets” assumed too much knowledge on the part of the reader; I thought it was frankly pretty obvious. If you have a controllable piece of industrial machinery that uses electricity and can process common elements into copies of itself, but runs no faster than bacteria, tearing apart a planet is a straightforward engineering exercise. I did NOT mean that the machinery looked like bacteria in any way, merely that it could copy itself no faster than bacteria.
And by “copy itself”, what I really meant is that given supplies of feedstock (bacteria need sugar, water, and a few trace elements... our “nanomachinery” would need electricity, and a supply of intermediates, in pure form, for every element you are working with), it can arrange that feedstock into thousands of complex machine parts, such that the machinery doing this can make its own mass in atomically perfect products in an hour.
I’ll leave it up to you to figure out how you could use this tech to take a planet apart in a few decades. I don’t mean a sci-fi swarm of goo, I mean an organized effort resembling a modern mine or construction site.
It’s not clear to me what you mean by “tearing apart a planet.” Are you sifting out most of the platinum and launching it into orbit? Turning it into asteroids? Rendering the atmosphere inhospitable to humans?
Because I agree that the last is obviously possible, the first probably possible, and the second probably impossible without ludicrous expenditures of effort. But it’s not clear to me that nanotechnology would be the core enabler for any of those.
If you mean something like “reshape the planet in its image,” then again I think bacteria are a good judge of feasibility, because of the feedstock issues. As well, it eventually becomes more profitable to prey on the nanomachines around you than on the inert environment, and so soon we have an ecosystem a biologist would find familiar.
Jumping to another description, we could talk about “revolutionary technologies,” like the Haber-Bosch process, which consumes about 1% of modern energy usage and makes agriculture and industry possible on modern scales. It’s a chemical trick that extracts nitrogen from its inert state in the atmosphere and puts it into more useful forms like ammonia. Nanotech may make many tricks like that much more available and ubiquitous, but I think it will be a somewhat small addition to current biological and chemical industries, rather than a total rewriting of those fields.
This problem is very easy to solve using induction. Base step: the minimum “replicative subunit”. For life, that is usually a single cell. For nano-machinery, it is somewhat larger. For the sake of penciling in numbers, suppose you need a robot with a scoop and basic mining tools, a vacuum chamber, a 3D printer able to melt metal powder, a nanomachinery production system that is itself composed of nanomachinery, a plasma furnace, a set of pipes and tubes and storage tanks for producing the feedstock the nanomachinery needs, and a power source.
All in all, you could probably fit a single subunit into the size and mass of a Greyhound bus. One notable problem is that there’s enough complexity here that current software could probably not keep a factory like this running forever, because eventually something would break that it doesn’t know how to fix.
Anyways, you set down this subunit on a planet. It goes to work. In an hour, the nanomachinery subunit has made a complete copy of itself. In somewhat more time, it has to manufacture a second copy of everything else. The nanomachinery subunit makes all the high-end stuff—the sensors, the circuitry, the bearings—everything complex, while the 3D printer makes all the big parts.
Pessimistically, this takes a week. A Greyhound bus footprint is 9x45 feet, about 405 square feet, and there are about 5.5e15 square feet of surface on Earth, so full coverage takes roughly 1.4e13 subunits. Doubling once a week from a single unit, covering the whole planet’s surface would therefore take about 44 weeks.
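The arithmetic behind the 44 weeks, made explicit under the same assumptions (one doubling per week, starting from a single bus-sized subunit):

```python
import math

BUS_SQFT = 9 * 45             # subunit footprint, square feet
EARTH_SQFT = 5.5e15           # Earth's surface area, square feet

units = EARTH_SQFT / BUS_SQFT                  # ~1.4e13 subunits
weeks = math.ceil(math.log2(units))            # weekly doublings from 1 unit
print(f"subunits needed: {units:.2e}")         # ~1.36e13
print(f"weeks to cover the surface: {weeks}")  # 44
```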
Now you need to do something with all the enormous piles of waste material (stuff you cannot make more subunits with) and unneeded materials. So you reallocate some of the 1.4e13 robotic systems to build electromagnetic launchers to fling the material into orbit. You also need to dispose of the atmosphere at some point, since all that air causes each electromagnetic launch to lose energy to friction, and waste heat is a huge problem. (My example isn’t entirely fair; I suspect that waste heat would cook everything before the 44 weeks passed.) So you build a huge number of stations that either compress the atmosphere or chemically bond the gases to form solids.
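The waste-heat suspicion is easy to sanity-check. Even with perfectly efficient launchers, the energy needed just to lift the crust out of Earth’s gravity well, spent over a few decades, dwarfs total incoming sunlight; the crust mass and the 30-year timescale below are round assumptions for scale:

```python
CRUST_KG = 2.6e22          # approximate mass of Earth's crust
ESCAPE_J_PER_KG = 6.3e7    # ~GM/R, energy to escape Earth's gravity
YEARS = 30
SECONDS = YEARS * 3.15e7   # seconds per year, rounded

launch_power = CRUST_KG * ESCAPE_J_PER_KG / SECONDS
solar_input = 1.7e17       # total sunlight intercepted by Earth, W

print(f"launch power: {launch_power:.1e} W")   # ~1.7e21 W
print(f"solar input:  {solar_input:.1e} W")
print(f"ratio: ~{launch_power / solar_input:.0f}x sunlight")
```

Dissipating on the order of ten thousand times the planet’s solar input as waste heat is what forces the sun-shade and radiator heroics in the next paragraph.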
With the vast resources in orbit, you build a sun-shade to stop all solar input to reduce the heat problem, and perhaps you build giant heat radiators in space and fling cold heat sinks to the planet or something. (With no atmospheric friction and superconductive launchers, this might work.) You can also build giant solar arrays and beam microwave power down to the planet to supply the equipment, so that each subunit no longer needs a nuclear reactor.
Once the Earth’s crust is gone, what do you do about the rest of the planet’s mass? Knock molten globules into orbit by bombarding the planet with high-energy projectiles? Build some kind of heat-resistant containers that you launch into space full of lava? I don’t know. But at this point you have converted the entire crust into machines, or into waste piles to work with.
This is also yet another reason that AI is part of the puzzle. Even if failures were rare, there probably are not enough humans available to keep 1e13 robotic systems functioning if each system occasionally needed a remote worker to log in and repair some fault. There’s also the engineering part of the challenge: these later steps require very complex systems to be designed and operated. If you have human-grade AI, and the hardware to run a single human-grade entity is just a few kilograms of nano-circuitry (like the actual hardware in your skull), you can create more intelligence to run the system as fast as you replicate everything else.