The blindfold refers to our ability to manipulate atoms in complicated structures only through several layers of indirection.
Technically, the blindfold was intended to refer to the fact that you can’t make measurements on the system while you’re shaking the box, because your measuring device will tend to perturb the atoms you’re manipulating.
The walls of the box that you’re using to push the legos around were intended to refer to the fact that we can only manipulate atoms using clumsy tools and several layers of indirection, but we’re basically on the same page.
This is also wrong. The actual proposals for MNT involve creating a system that is very stable, so you can measure it safely. The actual machinery is a set of parts that are as strong as they can possibly be made (this is why the usual proposals involve covalently bonded carbon, i.e. diamond), so they are stable and you can poke them with a probe. You keep the box as cold as practical.
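To put rough numbers on the “stiff parts, cold box” point: by equipartition, a part held with effective stiffness k_s jitters with RMS displacement sqrt(k_B·T / k_s). The 10 N/m stiffness below is just an assumed illustrative value, not a figure from any particular MNT proposal:

```python
# Rough estimate of thermal positional jitter for a stiff nanoscale part.
# By equipartition, (1/2)*k_s*<x^2> = (1/2)*k_B*T, so x_rms = sqrt(k_B*T / k_s).
# The stiffness (10 N/m) is an assumed, illustrative value.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
k_s = 10.0           # assumed effective stiffness of the part's mounting, N/m

for T in (300.0, 77.0, 4.0):   # room temperature, liquid nitrogen, liquid helium
    x_rms = math.sqrt(k_B * T / k_s)
    print(f"T = {T:5.1f} K  ->  x_rms = {x_rms * 1e12:5.1f} pm "
          f"(a C-C bond is ~150 pm)")
```

The colder and stiffer you make the machinery, the smaller that jitter gets relative to a bond length, which is the whole point of keeping the box cold.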
It’s true that even if you set everything up perfectly, there are some events that can’t be observed directly, such as bonding and rearrangements that could destroy the machine. In addition, practical MNT systems would be 3D mazes of machinery stacked on top of each other, so it would be very difficult to diagnose failures. To summarize: in a world with working MNT, there’s still lots of work that has to be done.
Building molecular nanotechnology seems to be nothing like being able to make arbitrary lego structures by shaking a large bin of lego in a particular way while blindfolded. Drexler proposes we make nano-scale structures in factories made of other nano-scale components. That’s a far more sensible picture.
Nothing like it? Map the atoms to individual lego pieces; their configuration relative to each other (i.e. lining up the pegs with the holes) was intended to capture the directionality of covalent bonds. We capture forces and torques well, since smaller legos tend to be easier to move but harder to separate than larger legos. The shaking represents acting on the system via some thermodynamic force. Gravity represents the tendency of things to settle into some local ground state that your shaking will have to push them away from. I think it does a pretty good job of capturing some of the problems with entropy, and with exerted forces producing random thermal vibrations, since those things are true at all length scales. The blindfold is because you aren’t Laplace’s demon, and you can’t really measure individual chemical reactions while they’re happening.
If anything, the lego system has too few degrees of freedom, and it doesn’t capture the sheer scale of the problem you’re dealing with, because we can’t imagine a mole of lego pieces.
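For a sense of just how unimaginable a mole is, here’s a back-of-the-envelope calculation (the ~5 cm³ brick volume is an assumed round figure, not an official spec):

```python
# Back-of-the-envelope: how much room would a mole of lego bricks take up?
# The ~5 cm^3 per brick is an assumed round figure for a standard 2x4 brick.
AVOGADRO = 6.022e23          # pieces per mole
BRICK_VOLUME_M3 = 5e-6       # assumed volume per brick, m^3
OCEAN_VOLUME_M3 = 1.33e18    # approximate volume of Earth's oceans, m^3

total_m3 = AVOGADRO * BRICK_VOLUME_M3
print(f"A mole of bricks fills ~{total_m3:.1e} m^3, "
      f"about {total_m3 / OCEAN_VOLUME_M3:.1f} times the volume of the oceans.")
```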
I try not to throw out analogies willy-nilly. I really think that the problem of MNT is the problem of keeping track of an enormous number of pieces and interactions, and of pushing them in very careful ways. I think that shaking a box of lego is a very reasonable, human-friendly approximation of what’s going on at the nanoscale. My example doesn’t do a good job of describing the varying strengths or types of molecular bonds, nor does it capture bond stretching or deformation in a meaningful way, but on the whole I think that saying it’s nothing like the problem of MNT is too strong a statement.
The way biological nanotechnology (aka the body you are using to read this) solves this problem is by bonding the molecule being “worked on” to a larger, more stable molecule. This means that instead of the whole box of legos shaking around everywhere, as you put it, it’s a single lego shaking around while bonded to a tool (the tool is composed of more legos, true, but it’s made of a LOT of legos connected in a way that makes it fairly stable). The tool is able to grab the other lego you want to stick to the first one, and is able to press the two together in a way that gives the bonding reaction a low energetic barrier. The tool is shaped such that other side-reactions won’t “fit” very easily.
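The reason pressing the pieces together so the barrier is low matters so much is that reaction rates depend exponentially on the barrier height (the Arrhenius relation). The two barrier values below are made-up illustrative numbers, not data for any real enzyme:

```python
# Why a lower activation barrier matters: the Arrhenius relation says
# rate ~ A * exp(-Ea / (R*T)), so the rate depends exponentially on Ea.
# Both barrier heights below are made-up illustrative values.
import math

R = 8.314                # gas constant, J/(mol*K)
T = 298.0                # room temperature, K
Ea_without_tool = 100e3  # assumed barrier with no "tool" holding the pieces, J/mol
Ea_with_tool = 60e3      # assumed barrier when the tool presses them together, J/mol

speedup = math.exp((Ea_without_tool - Ea_with_tool) / (R * T))
print(f"Lowering the barrier by 40 kJ/mol speeds the reaction up by ~{speedup:.0e}x")
```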
Anyways, after a series of these reactions you eventually have the final product, a nice finished assembly that is glued together pretty strongly. In the final step you break the final product loose from the tool, analogous to ejecting a cast product from a mold. Check it out: http://en.wikipedia.org/wiki/Pyruvate_dehydrogenase
Note a key difference here between biological nanotech (life) and the way you described it in the OP. You need a specific toolset to create a specific final product. You CANNOT make any old molecule. However, you can build these tools from peptide chains, so if you did want another molecule you might be able to code up a new set of tools to make it (and possibly build those tools using the tools you already have).
Another key factor here is that the machine that does this would operate inside an alien environment compared to existing life—it would operate in a clean vacuum, possibly at low temperatures, and would use extremely stiff subunits made of covalently bonded silicon or carbon. The idea here is to make your “lego” analogy manageable. All the “legos” in the box are glued tightly to one another (low temperature, strong covalent bonds) except for the ones you are actually playing with. No extraneous legos are allowed to enter the box (vacuum chamber).
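As a rough sanity check on “glued tightly”: the Boltzmann factor for random thermal motion breaking a covalent bond is astronomically small. The ~3.6 eV figure is a typical textbook value for a C–C single bond; nothing else here is specific to any MNT design:

```python
# Why covalently bonded parts count as "glued tightly": compare a typical
# C-C single bond energy (~3.6 eV) to thermal energy k_B*T. The Boltzmann
# factor exp(-E_bond / (k_B*T)) bounds how often thermal motion alone
# could break such a bond.
import math

k_B_eV = 8.617e-5   # Boltzmann constant, eV/K
E_bond = 3.6        # approximate C-C single bond energy, eV

for T in (300.0, 77.0):
    ratio = E_bond / (k_B_eV * T)
    print(f"T = {T:5.1f} K: E_bond/kT ~ {ratio:.0f}, "
          f"Boltzmann factor ~ {math.exp(-ratio):.1e}")
```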
If you want to bond a blue lego to a red lego, you force the two together in a way that controls which way they are oriented during the bonding. Check it out: http://www.youtube.com/watch?v=mY5192g1gQg
Current organic chemical synthesis DOES operate like a box of shaking legos, and this is exactly why it is very difficult to get lego models that come out without the pieces mis-bonded. http://en.wikipedia.org/wiki/Thalidomide
As for your “Schrödinger equations are impractical to compute”: what this means is that the Lego Engineers (sorry, nanotech engineers) of the future will not be able to solve every problem in a computer alone; they’ll have to build prototypes and test them the hard way, just as it is today.
Also, this is one place where AI comes in. The universe doesn’t have any trouble modeling the energetics of a large network of atoms. If we have trouble doing the same, even using gigantic computers made of many, many of these same atoms, then maybe the problem is that we are doing it in a hugely inefficient way. An entity smarter than humans might find a way to reformulate the math for many orders of magnitude more efficient calculations, or it might find a way to build a computer that more efficiently uses the atoms it is composed of.
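For context on why brute-force computation is hopeless and why a better formulation would matter: naively storing the exact quantum state of a system takes exponentially many numbers, which is why practical chemistry codes settle for polynomial-cost approximations (DFT, coupled cluster, etc.). A quick sketch of the naive scaling:

```python
# Naive scaling of exact quantum simulation: storing the full state of n
# two-level subsystems takes 2**n complex amplitudes. Practical codes avoid
# this with polynomial-cost approximations; this only shows the brute-force cost.
BYTES_PER_AMPLITUDE = 16.0   # one complex double-precision number

for n in (20, 50, 100, 200):
    amplitudes = 2.0 ** n
    print(f"n = {n:3d}: {amplitudes:.2e} amplitudes, "
          f"~{amplitudes * BYTES_PER_AMPLITUDE:.1e} bytes")
```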
If you have to do this (operate in a clean vacuum), then the threat of nanotech looks a lot smaller. Replicators that need a nearly perfect vacuum aren’t much of a threat.
This sounds very close to a default assumption that these processes are genuinely easy not just to compute, but also to invert to find the solutions one wants. Answering “how will this protein most likely fold?” is computationally much easier (as far as we can tell) than answering “what protein will fold like this?” It may well be that these problems are substantially easier computationally than we currently think. Heck, it could be that P=NP, or it could be that even with P != NP there’s still some algorithm with extremely slowly growing runtime that solves NP-complete problems. But these don’t seem like likely scenarios unless one has some evidence for them.
Got a reference for that? It’s not obvious to me (CS background, not bio).
What if you have an algorithm that attempts to solve the “how will this protein most likely fold?” problem, but is only tractable on 1% of possible inputs, and just gives up on the other 99%? As long as the 1% contains enough interesting structures, it’ll still work as a subroutine for the “what protein will fold like this?” problem. The search algorithm just has to avoid the proteins that it doesn’t know how to evaluate. That’s how human engineers work, anyway: “what does this pile of spaghetti code do?” is uncomputable in the worst case, but that doesn’t stop programmers from solving “write a program that does X”.
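Here’s a minimal sketch of that “only evaluate what you can” search loop. The “folding” function is a deliberately silly stand-in so the example runs; a real forward predictor would be a physics or ML model that simply returns None on the inputs it can’t handle:

```python
# Toy sketch of "restrict the search to inputs you can evaluate".
# predict_fold is a deliberately silly stand-in (sorted letters) so the example
# runs; a real forward predictor would be tractable on only some inputs and
# return None on the rest, and the search simply skips those.
from itertools import product
from typing import Optional

def predict_fold(sequence: str) -> Optional[str]:
    # Pretend the predictor is only tractable for sequences starting with "A"
    # and gives up on everything else.
    if not sequence.startswith("A"):
        return None
    return "".join(sorted(sequence))   # stand-in for a predicted structure

def design(target_structure: str, length: int = 4) -> Optional[str]:
    # Inverse search: try candidates, skipping anything we can't evaluate.
    for letters in product("ADGV", repeat=length):
        sequence = "".join(letters)
        predicted = predict_fold(sequence)
        if predicted is None:
            continue                   # the inputs we just give up on
        if predicted == target_structure:
            return sequence            # predicted to "fold" into the target
    return None

print(design("ADGV"))   # finds a sequence the toy predictor says folds into "ADGV"
```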
Sure, see for example here, which discusses some of the issues involved. That said, your essential point may still have merit, because it is likely that many of the proteins we would want have much more restricted shapes than those in the general problem. Also, I don’t know much about what work has been done in the last few years, so it is possible that the state of the art has changed substantially.
The idea is to have a vacuum inside the machinery; a macroscopic nanofactory can still exist in an atmosphere.
Sure, but a lot of the hypothetical nanotech disasters require nanotech devices that are themselves very small (e.g. the grey goo scenarios). If one requires a macroscopic object to keep a stable vacuum, then the set of threats goes down by a lot. Obviously some threats are still present (such as the possibility that almost anyone will be able to refine uranium), but many of them aren’t, and many of the obvious scenarios connected to AI would then look less likely.
I don’t know... I think ‘grey goo’ scenarios would still work even if the individual goolets were insect-sized.