I don’t have a stance on MNT either. If it were possible, that would be great, but what would be even better is if we could actually foresee what is truly possible within the realm of reality. At the very least, that would allow us to plan our futures.
However, I hope you won’t mind me making a counter-argument to your claims, just for the sake of discussion.
EoC and Nanosystems aren’t comparable. EoC is not even a book about MNT per se; it is more a book about the impact of future technology on society (it has chapters devoted to the internet and other things—it’s also notable that he successfully predicted the rise of the internet). Nanosystems, on the other hand, is an engineering book. It starts out with a quantitative scaling analysis of things like magnetism, static electricity, pressure, and velocity at the macroscale and the nanoscale, and proceeds into detailed engineering computations. It is essentially a classical engineering text, except at the nanoscale.
As for the science presented in Nanosystems, I view it as less of a ‘blueprint’ and more of a theoretical exploration of the most basic nanotechnology that is possible. For example, Drexler presents detailed plans for a nanomechanical computer. He does not claim that future computers will be like what he envisions. His nanomechanical computer is simply a theoretical proof of concept, there to show that computing at the nanoscale is possible. It’s unlikely that practical nanocomputers of the future (if they are possible) will look anything like it; they will probably not operate on mechanical principles at all.
Now about your individual arguments:
Conservation of Energy: In Nanosystems, Drexler carries out a lot of energy computations. However, it is true that, in general, building things at the molecular level is not necessarily more energy-efficient than building them the traditional way, i.e. in bulk. In fact, for many things it would probably be far less energy-efficient. It seems to me that even if MNT were possible, most things would still be made using bulk technology; MNT would be used only for high-tech components such as computers. (A rough back-of-envelope sketch of the energy question follows at the end of this comment.)
Modelling is Hard: You’re talking about solving the Schrödinger equation analytically. In practice, a sufficiently precise numerical simulation is more than adequate. In fact, ab initio quantum simulations (simulations that make only the most modest of assumptions and compute from first principles) have been carried out for relatively large molecules. I think it is safe to assume that future computers will be able to model at least something as complicated as a nanoassembler entirely from first principles.
A factory isn’t the right analogy: I don’t understand this argument.
Chaos: You mention chaos but don’t explain why it would ruin MNT. The limiting factor in current quantum mechanical simulations is not chaotic dynamics.
The laws of physics hold: Wholeheartedly agree. However, even within the bounds of current physics there is a lot of leeway. Cold fusion may be a no-no, but hot fusion is definitely doable, and there is no law of physics (that we know of) that says you can’t build a compact fusion reactor.
The simulations of molecular gears and such that you find on the internet are of course fanciful. They have been done with molecular dynamics, not ab initio simulation. You are correct that stability analysis has not been done for them. However, stability analysis of various diamondoid structures has been carried out, and contrary to the ‘common knowledge’ that diamond decays to graphite at the surface, defect-free passivated diamond turns out to be perfectly stable at room temperature, even in weird geometries [1].
Agree.
De novo enzymes have been created that perform functions unprecedented in the natural world [2] (this was reported in the journal Nature). Introduction of such proteins into bacteria leads to evolution and refinement of the initial structure. The question is not one of ‘doing better than biology’; it’s about technology and biology working together to achieve nanotech by any means necessary. You are correct that we are still very, very far from the level of mastery over organic chemistry that nature seems to have. Whether organic synthesis is a plausible route to MNT remains to be seen.
If this is about creating single carbon atoms, you are right. However, nowhere is it said that single carbon atoms would need to exist in isolation. Carbon dimers can exist freely, and ab initio simulations have shown that they can quite readily be made to react and bond with diamond surfaces [3]. I think it’s more plausible that this is what is actually meant. I don’t believe Drexler is so ignorant of basic chemistry as to have made this mistake.
I do not have enough knowledge to give an opinion on this.
I also agree that at present there is no way to know whether such programmable machines are possible. However, they are not strictly necessary for MNT. A nanofactory would be able to achieve MNT without needing any kind of nanocomputer anywhere. Nanorobots are not necessary, so arguments refuting them do not by any means refute the possibility of MNT.
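To put a very rough number on the energy point above: this is my own back-of-envelope sketch, not a figure from Nanosystems, and the 1 eV dissipated per placed atom is an assumption chosen purely for illustration.

```python
# Back-of-envelope: energy to build 1 kg of carbon by placing atoms one
# at a time, assuming (purely for illustration) ~1 eV dissipated per atom.

AVOGADRO = 6.022e23              # atoms per mole
EV_TO_JOULES = 1.602e-19         # joules per electronvolt
CARBON_MOLAR_MASS_KG = 12.0e-3   # kg per mole of carbon

atoms_per_kg = AVOGADRO / CARBON_MOLAR_MASS_KG       # ~5e25 atoms
energy_joules = atoms_per_kg * 1.0 * EV_TO_JOULES    # ~8e6 J

print(f"~{energy_joules / 1e6:.0f} MJ per kg at 1 eV per placed atom")
```

At 1 eV per atom this comes out around 8 MJ/kg, which is in the same rough ballpark as bulk processes (primary steelmaking is commonly quoted at a couple of tens of MJ/kg), so the comparison hinges entirely on how much really gets dissipated per placed atom, which is exactly the kind of number Nanosystems tries to bound.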
However, I hope you won’t mind me making a counter-argument to your claims, just for the sake of discussion.
Pleased as punch. I’m not an authority, just getting the ball rolling.
EoC and Nanosystems aren’t comparable
Noted. Repurchased Nanosystems.
Modelling is Hard: You’re talking about solving the Schrödinger equation analytically. In practice, a sufficiently precise numerical simulation is more than adequate. In fact, ab initio quantum simulations (simulations that make only the most modest of assumptions and compute from first principles) have been carried out for relatively large molecules. I think it is safe to assume that future computers will be able to model at least something as complicated as a nanoassembler entirely from first principles.
Haha. And what is “ab initio”? That’s a fighting word where I’m from. The point I’m striving to make here is that our “ab initio” methods are constantly being tweaked and evolved to fit experimental data. Granted, we’re making mathematical approximations in the model rather than using explicit empirical fitting parameters, but if an AI is going to have a hard time coming up with God’s own exchange-correlation functional, then it’s not going to be able to leap-frog all the stumbling in the dark we’re doing testing different ways to cut corners. If the best ab initio modelling algorithm the AI has is coupled cluster or B3LYP, then I can tell you exactly how big a system it can handle, how accurately, and with how many resources. That’s a really tight constraint, and I’m curious to see how it goes over. As for modelling assemblers, I can model a nanoassembler from first principles right now if you tell me where the atoms go. Of course “first principles” is up for debate, and I won’t have “chemical accuracy”. What I’m less sure about is whether I can model it interacting with its environment.
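For concreteness, here is a minimal sketch of what present-day “ab initio” practice looks like, using the open-source PySCF package (my choice of tool for illustration, nothing canonical): a B3LYP energy and a CCSD energy for a single water molecule. The geometry and basis set are illustrative choices. Calculations like this are routine for tens of atoms; the catch is that CCSD cost grows roughly as N^6 with system size, which is exactly the tight constraint I mean.

```python
# Minimal sketch of present-day "ab initio" practice, using PySCF.
# Geometry is a rough water molecule; basis and methods are illustrative.
from pyscf import gto, scf, dft, cc

mol = gto.M(
    atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",  # angstroms
    basis="cc-pvdz",
)

# DFT with the B3LYP exchange-correlation functional: relatively cheap,
# but the functional itself is an approximation tuned over decades.
mf_dft = dft.RKS(mol)
mf_dft.xc = "b3lyp"
e_b3lyp = mf_dft.kernel()

# Coupled cluster (CCSD) on top of Hartree-Fock: more systematic,
# but the cost scales roughly as N^6 in system size, which is why
# "just simulate the assembler from first principles" is not free.
mf_hf = scf.RHF(mol).run()
e_ccsd = cc.CCSD(mf_hf).run().e_tot

print(f"B3LYP total energy: {e_b3lyp:.6f} Hartree")
print(f"CCSD  total energy: {e_ccsd:.6f} Hartree")
```

(The sketch assumes PySCF is installed; any comparable quantum chemistry package would do.)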
I also agree that at present there is no way to know whether such programmable machines are possible. However, they are not strictly necessary for MNT. A nanofactory would be able to achieve MNT without needing any kind of nanocomputer anywhere. Nanorobots are not necessary, so arguments refuting them do not by any means refute the possibility of MNT.
Sure. If I need to dig a hole I’d rather have a shovel than a “programmable” shovel any day. But if you have a whole bunch of different tools, you’re back to the problem of how they get to the work site in the right order. It doesn’t have the same determinism as that “programmed protein” machine.
Perhaps you would care to explain what you mean? I admit I’m not quite sure what your argument is here.
There’s a fundamental disconnect between a machine and a programmable machine. A machine is presumed to do one operation, and do it well. A machine is like a shovel or a lever. It’s not unnecessarily complicated, it’s not terribly difficult to build, and it can usually work with pretty wide failure tolerances. This is why you just want a shovel, and not a combination shovel/toaster, when you have to dig a hole.
A programmable machine is like a computer. It is capable of performing many different operations depending on what kinds of inputs it receives. Programmable machines are complicated, difficult to construct, and can fail in both very subtle and very spectacular ways.
We can also imagine the distinction between a set of woodworking tools and a 3D printer. A hammer is a machine. A RepRap is a programmable machine.
If the question we’re trying to answer is “can we build a protein hammer?”, the answer is probably yes. But if we make a bunch of simple protein hammers, then we have to solve the very difficult problem of ensuring that each tool is in the right place at the right time. A priori, there’s no molecular carpenter ensuring that those tools happen to encounter whatever we’re trying to build in any consistent order (see the toy sketch after this explanation).
That’s a very different problem from “can we make a protein 3D printer”, a machine that has the ability to respond to complicated commands.
I’m not sure which of these situations is the one being advocated for by MNT proponents.
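As a toy illustration of that ordering problem (a cartoon model, not a claim about real chemistry): suppose a build needs n distinct tools applied in a fixed order, each random encounter delivers one of the n tools uniformly at random, and only the currently needed tool does anything useful. Each step then waits about n encounters on average, so the whole build takes roughly n² random encounters, versus exactly n operations for a sequenced, programmable machine.

```python
import random

def encounters_to_finish(n_tools: int, rng: random.Random) -> int:
    """Random tool encounters needed to finish an n-step build when only
    the currently needed tool advances the build (toy model)."""
    encounters, step = 0, 0
    while step < n_tools:
        encounters += 1
        if rng.randrange(n_tools) == step:  # the right tool happened to show up
            step += 1
    return encounters

rng = random.Random(0)
n = 20
trials = [encounters_to_finish(n, rng) for _ in range(2000)]
print(f"mean random encounters for n={n}: {sum(trials) / len(trials):.0f} "
      f"(analytic estimate n^2 = {n * n}; a sequenced machine needs exactly {n})")
```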
Again, you’re trying to argue against nanoassemblers. If you’re trying to say that nanoassemblers will be difficult to build, I entirely concede that point! If they weren’t, we’d have them already.
Nevertheless, we already have programmable machines that are built from components of nanoscopic size and are subject to weird quantum effects, yet have billions of components and work relatively smoothly. Such devices would have been thought impossible just a few decades ago. So the mere fact that something would be immensely complex is no argument for its impossibility.
However, as I said, this is all beside the point, since MNT does not strictly require nanoassemblers. A nanofactory would be built of a large set of simple tools as you describe—each tool only doing its own thing. This is much like biology, where each enzyme is designed to do one thing well. However, unlike biology, a nanofactory would be designed along the lines of an assembly line. Components would be created in controlled conditions, probably high vacuum (possibly even at cryogenic temperatures, especially for components with unstable or metastable intermediates). Power would be delivered electrically or mechanically, not through ATP.
Why not just do it like biology? Because of different design constraints. Biological systems need to be able to grow and self-repair; our nanofactory would have no such constraints. Instead, the focus would be on high throughput and reconfigurability, which necessitates a more controlled, higher-power environment than the Brownian diffusion-reaction processes of biology.
Great, so this I think captures a lot of the difficulty in this discussion: there are a lot of different opinions as to what exactly constitutes MNT. In my reading of Drexler so far, he appears to more or less believe that early nanotech will be assembled by co-opting biological assemblers like the ribosome. That’s specifically the vision of MNT that I’ve been trying to address.
Since you seem not to hold that view of MNT, do you have a concise description of your view that you could offer, which I could add to the discussion post above? I’m particularly interested in what environment you imagine your nanoassembler operating in.
To add to my reply above: one way to discuss the specifics of future technology is to take the approach Nanosystems takes, operating within safe limits of known technology and limiting concepts to those that are more or less guaranteed to work, even if they are probably inefficient. In this way, even though we acknowledge that our designs could not be built today, and that future technology will probably build things in an entirely different way, we can still have a rough picture of what is and isn’t possible.
For example, take this video: http://www.youtube.com/watch?v=vEYN18d7gHg
It shows an ‘assembly line for molecules’. Of course, many questions are left unanswered: energy consumption, reconfigurability, throughput. It’s not at all clear whether the whole thing would actually be an improvement over current technology. For example, would this nanofactory be able to produce additional nanofactories? If not, it wouldn’t make things any cheaper or more efficient.
However, it does serve as a conceptual starting point. And indeed, small-scale versions of the technology exist right now (people have automated AFMs that are capable of producing atomic structures; people have also used AFMs to modify, break, and form chemical bonds).
there are a lot of different opinions as to what exactly constitutes MNT
There are two different discussions here. One is the specific form the technology will take. The other is what it will be capable of doing. On the latter: the idea is to have a technology that can construct things at only marginally higher cost than that of the raw materials. If MNT is possible, it will be able to turn dirt into strawberries, coal into diamonds, sand into computers and solar panels, and metal ore into rocket engines. Note that we are capable of accomplishing all of these feats right now; it’s just that they take too much time and effort. The promise of MNT, and why it is so tantalizing, is precisely that it would reduce this time and effort substantially once functional.
I’m more than willing to debate about the specifics of the technology, although we will both have to admit that any such discussion would be incredibly premature at this point. I don’t think a convincing case can be made right now for or against any hypothetical technology that will be able to achieve MNT.
I’m also more than willing to debate about the fundamental physical limits of construction at the nanoscale, but in that case it is much harder to refute the premise of MNT.
it’s also notable that he successfully predicted the rise of the internet
Quibble: there was plenty of internet in 1986. What he predicted was a global hypertext publishing network, its scale of impact, and when it would arrive (the mid-90s). (He didn’t give any such timeframe for nanotechnology, which I guess is worth mentioning.)
References:
[1] http://www.molecularassembler.com/Papers/TarasovFeb2012.pdf
[2] http://dx.doi.org/10.1038%2Fnature01556
[3] http://www.molecularassembler.com/Papers/AllisHelfrichFreitasMerkle2011.pdf