I did my PhD doing STM. So on the one hand, yes, you’re right: I know first-hand how to prepare an environment where gas molecules are hitting any given point on your sample less than once per week. But if you’re trying to work with stuff that’s highly reactive (particularly to hydrogen gas), it will get dirty anyway, and after 1% of a week it will be ~1% covered in dirt.
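For a sense of scale, here’s a back-of-envelope kinetic-theory (Hertz–Knudsen) version of that claim. The pressure, temperature, and site density below are illustrative assumptions, not numbers from any particular setup:

```python
import math

# Rough impingement-rate estimate for residual H2 in very good UHV.
# All inputs are assumed, illustrative values.
P = 1e-12 * 133.3              # assumed chamber pressure: 1e-12 Torr, in Pa
T = 300.0                      # residual gas roughly at wall temperature, K
m = 2 * 1.66e-27               # mass of an H2 molecule, kg
k_B = 1.381e-23                # Boltzmann constant, J/K

flux = P / math.sqrt(2 * math.pi * m * k_B * T)   # molecules per m^2 per s
site_density = 1.5e19                             # surface atoms per m^2, order of magnitude
weeks_per_hit = site_density / flux / (86400 * 7)

print(f"each surface site is hit roughly once every {weeks_per_hit:.1f} weeks")
# For a surface that reacts on ~every hit (sticking coefficient ~1), the covered
# fraction grows roughly linearly with exposure time, hence ~1% dirt after ~1% of
# that interval.
```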
I also know how sticky, erratic, and overall annoying atoms are to work with. Low temperatures will usually not save you, because there will be multiple metastable configurations of most non-crystalline blobs of atoms, and anything interesting is locally energetic enough to excite some of those transitions. If you try to grab atoms from feedstock along pre-planned trajectories, it will fail, because you will non-deterministically pick up clusters of atoms of different sizes. If you try to replace a bulk feedstock with a molecular ratchet to dispense single atoms, it will fail, because the ratchet will run backwards sometimes and not dispense an atom when you want it. And then your atom-grabber will stick itself to something important once a day, because why not.
I feel like this is a mechanical engineer complaining about friction and bearings seizing up. Those are real phenomena, but we have ways of keeping them under control, and they make engineering a bit harder, not impossible.
If you want a nano-mechanical ratchet that will essentially never thermally reverse, the obvious trick is to drive it with lots of energy. The probability of reversal goes down exponentially with energy, so expend 20x the typical thermal energy and the chance of Brownian motion knocking it backwards is basically 0.
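As a quick sanity check on the exponential claim (treating reversal as a thermally activated event with a Boltzmann factor; the barrier values are just examples):

```python
import math

# Probability that thermal noise runs the ratchet backwards scales like exp(-E/kT).
for barrier_in_kT in (5, 10, 20, 30):
    print(f"drive energy {barrier_in_kT:>2} kT -> per-step reversal probability ~ "
          f"{math.exp(-barrier_in_kT):.1e}")
# Each additional ~2.3 kT of drive energy buys another factor of ten in reliability.
```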
There are more subtle things you can do, too. For example, there is the unreliable-train model: the train goes forwards more often than it goes backwards, and the tracks only lead one place, so eventually it gets to its destination. If the whole nanomachine sometimes runs backwards in sync, that’s not a problem; it’s only parts of it running backwards that’s a problem. I suspect there is also something really fancy involving uncomputing compression algorithms on thermal noise.
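A toy simulation of the unreliable-train idea (the forward-step probability and distance are arbitrary, just to show a modest forward bias is enough):

```python
import random

# Biased walk on a one-way track: forward with probability p, backward otherwise.
# Backward steps cost time, not correctness, because the track only leads one place.
def steps_to_arrive(distance=100, p_forward=0.6, rng=random):
    position, steps = 0, 0
    while position < distance:
        position += 1 if rng.random() < p_forward else -1
        position = max(position, 0)   # can't back out past the start of the track
        steps += 1
    return steps

runs = [steps_to_arrive() for _ in range(200)]
print(f"arrived in every run; mean steps ≈ {sum(runs) / len(runs):.0f} "
      f"(vs. 100 if it never slipped backwards)")
```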
These are engineering problems, with engineering solutions. Not fundamental physical rules prohibiting nanotech.
Well, sometimes when mechanical engineers warn you about friction (or about your building materials not being suitable for the structure you’ve dreamed up), they’re right. I think the whole “print single atoms into structures of more than 200 atoms” paradigm is a dead end for reasons that can basically be described as “warnings about atoms as a building material.”
An analogy would be a robot that can pick up a deck of cards and shuffle them, without sensors. And the robot is built out of Jell-O.
What is noticeably different between these two worlds: the world where that kind of nanotech is just a fairly tricky engineering problem, and the world where it is impossible? In both worlds you struggle to get a handful of atoms to behave; it’s just that in one world you are battling the limitations of crude current tools, and in the other you are battling a mixture of current technical limits and fundamental limits.
“Iron is an inherently lumpy material. When a bloom comes out of a bloomery it’s made of loads of lumps all stuck together. And each batch is a bit different. You can try to hammer it into shape, but the more you hammer it, the more grit comes off your hammer and the more the iron forms an ore-like layer on its surface. The only thing that can hammer iron is rocks, and most rocks can’t be made into a sharp point. Flint can, but if you hit it too hard it will flake off. These gear things are impossible; you can never make them precise enough to actually fit together. Do you have any idea how hard it is to make iron into any usable tool at all? These aren’t just engineering details, these are fundamental limitations of the building material.”
Sure. But we humans are never guaranteed an “aha” moment that lets us distinguish the two. If you have no physics-level guarantee that your technology idea will be useful, and no physics-level argument for why it won’t, then you may for a long time occupy the epistemic state of “I’m sure that almost this exact idea is good, it just has all these inconvenient engineering-type problems that make our current designs fail to work. But surely we’ll figure out how to bypass all these mere engineering problems without reevaluating the basic design.”
In this case, we face a situation of uncertainty. Two biases dominate our thinking on tech here:
Optimism bias. We are unduly optimistic about when certain things will happen, especially in the short term. Pessimists are usually right that technologies take longer to arrive than you think. At this point, the evidence is telling us that nanotechnology is not a simple trick and will not happen easily, but that doesn’t mean it’s outright impossible. The most we can say is that it’s difficult to make progress.
Conjunction fallacy. People imagine routes to technology as conjunctive, when they are usually disjunctive. This is where pessimists about possibilities are usually wrong. In order to prove Drexler wrong, you’d have to show why every path to nanotechnology has fundamental problems, and you didn’t do this. At best you’ve shown that STM-based assembly has massive problems. (And maybe not even that.)
So my prior is that nanotechnology is possible, but it will take much longer than people think.
Charlie, you obviously have expert-level knowledge on this.
Are you saying you ultimately concluded that:
(1) nanoassembly machinery won’t work in the problem space of “low temperatures, clean vacuum, feedstock bonding to target”. (Obviously nanoassembly machinery works fine at liquid-water temperatures, in solvent, for specific molecules.)
(2) it would be too difficult for you, or a reasonable collection of research labs working together, to solve the problems or make any meaningful progress. That a single piece of actually working nano-machinery would be so complex and fragile to build with STMs that you basically couldn’t do it; it would be like your robot-with-Jell-O-hands example.
I will note that you could shuffle cards blind most of the time if you’re allowed to harden up the robot and get a really accurate model of how Jell-O physics works.
I have expert-level knowledge on something, but probably not this precise thing. As for 1 or 2, it’s a bit complicated.
Let me start by defending 1. The paradigm of “atom by atom 3D printing of covalently-bonded materials” just has too many problems to actually work in the way Drexler envisioned, AFAICT. Humans might be able to build nanomachinery that 3D prints covalent bonds for a while before it breaks. But that is the Jell-O robot, and no matter how optimized you make a Jell-O robot, it’s not really going to work.
But even though I say that, maybe that’s a bit hyperbolic. A superintelligent AI (or maybe even humans with plenty of time and resources) could probably solve a lot of those problems in a way that keeps the same general aesthetic. The easiest problem is vacuum. We could do better than 10^-13 Torr if we really wanted to; it’s just that every step is expensive, slow, and makes it harder to do experimentation. If you make the vacuum five orders of magnitude better and cool everything down to a few microkelvin, you can let all of your steps take a lot longer, which loosens a lot of the constraints around hysteresis or energy getting dumped into the system by your actuators.
Could a superintelligence solve all the problems? Well, there are likely state transitions that remain an issue even at low temperature, due to quantum tunneling. I suspect a superintelligent AI (or a human with a sufficiently powerful computer) could solve these problems, but I’m not confident they would do so in ways that keep the “3D printing aesthetic” intact.
So I subscribe to 1 in the strict sense of “the exact things Drexler is talking about will have problems.” I subscribe to 2 in the sense of “Trying to fix this without changing the design philosophy will be really hard, if it’s possible.” And I want to point at some third thing like “A superintelligent AI trying to produce the same output would probably do it in a way that looks different than this.”
What 3D printing aesthetic? As I understand it, the core step of Drexlerian nanoassembly is that a target molecule is physically held in what is basically a mechanical jig, and feedstock gas (obviously filtered enough that it’s pure element-wise, though nanotechnology, like all chemistry, only operates on electron-cloud identity) is introduced to the target mechanically. The feedstock molecules are chosen so that bonding is energetically favorable. A chemical bond happens, and the new molecule is sent somewhere else in the factory.
The key point is that the proposal is to use machinery to completely enclose the chemistry and limit it to the one reaction you wanted. And the machinery doing this is specialized: it immediately starts working on the exact same bonding step again. It’s similar to how nature does it, except that biological enzymes are floppy, which lets errors happen, and they rely on the properties of water to “cage” molecules and otherwise act as part of the chemistry, whereas the Drexler way would require an actual physical tool to be robotically pressed into place, forcing there to be exactly one possible bond.
Did you read his books? I skimmed them and recall no discussion of direct printing; chemistry can’t do that.
So, at a higher level, a nanoforge is all these assembly lines, each producing exactly one product. The larger molecules being worked on can be sent down variant paths, and at the larger subsystem and robotic-machinery assembly levels there are choices. At the point where there are choices, these are big subassemblies of hundreds of daltons, just as nature strings peptides out of fairly bulky amino acids.
Primarily, though, you should realize that while a nanoforge would be a colossal machine made of robotics, it can only make this limited “menu” of molecules and robotic parts, and in turn almost all of these parts are used in itself. When it isn’t copying itself, it can make products, but those products are all just remixes from this limited menu.
It’s not an entirely blind process: robotic assembly stations can sense whether a large molecule is there, and they are shaped to fit only one molecule, so factory logic, including knowing if a line is “dead”, is possible. (Dead lines can’t be repaired, so you have to be able to route copies of what they were producing from other lines, and this slows the whole nanoforge down as it “ages”; it has to construct another complete nanoforge before something critical fails and it ceases to function.)
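A toy Monte Carlo makes that replication race concrete; every number, name, and the exponential-lifetime assumption below is made up purely for illustration:

```python
import random

# Lines die and can't be repaired; useful work accrues while lines survive; the
# forge "wins" only if it accumulates enough work for a full self-copy before
# some product type loses its last redundant line.
N_TYPES = 50              # distinct single-product assembly-line designs
LINES_PER_TYPE = 4        # redundant copies of each line
MEAN_LIFETIME = 1000.0    # mean time to failure of one line (arbitrary units)
COPY_WORK = 50_000.0      # total line-time needed to build a replacement forge

def forge_replicates_before_dying(rng=random):
    lifetimes = [[rng.expovariate(1 / MEAN_LIFETIME) for _ in range(LINES_PER_TYPE)]
                 for _ in range(N_TYPES)]
    death_time = min(max(group) for group in lifetimes)   # first type to lose all its lines
    work_done = sum(min(t, death_time) for group in lifetimes for t in group)
    return work_done >= COPY_WORK

trials = 2000
rate = sum(forge_replicates_before_dying() for _ in range(trials)) / trials
print(f"fraction of forges that finish a self-copy before dying: {rate:.2f}")
```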
Similarly other forms of errors may be reportable.
What I like about the nanoforge hypothesis is that we can actually construct fairly simple programmatic goals for a superintelligent narrow AI to follow to describe what this machine is, as well as a whole tree of subgoals. For every working nanoforge there is an immense combinatorial space of designs that won’t work, and this is recursively true down to the smallest discrete parts; as an optimization problem there is a lot of coupling. For instance, the small-molecule-level robotic assembly stations need to reuse as many parts as possible between them, because this shrinks the size and complexity of the overall machine.
This doesn’t subdivide well between design teams of humans.
Another coupling example: suppose that, after years of work, you discover a way to construct an electric motor at the nanoscale, and it scores best on a goal heuristic.
You then find it can’t be integrated into the housing another team was working on.
For an AI this is not a major problem—you simply need to remember the 1 million other motor and housing candidates you designed in simulation and begin combinatorially checking how they match up. In fact you never really commit to a possibility but always just maintain lists of possibilities as you work towards the end goal.
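A hypothetical sketch of what “never really commit” looks like; every name, field, and the toy compatibility constraint is invented for illustration:

```python
from itertools import product

# Each sub-task keeps its full ranked list of candidate designs, and
# cross-compatibility is checked combinatorially instead of freezing one choice.
motors   = [{"id": f"motor_{i}",   "shaft_nm": 2.00 + 0.001 * i} for i in range(1000)]
housings = [{"id": f"housing_{j}", "bore_nm":  2.05 + 0.001 * j} for j in range(1000)]

def compatible(motor, housing, clearance_nm=(0.02, 0.10)):
    # Placeholder constraint: the shaft must fit the bore with acceptable clearance.
    # A real check would be an expensive simulation, so cheap filters like this go first.
    gap = housing["bore_nm"] - motor["shaft_nm"]
    return clearance_nm[0] <= gap <= clearance_nm[1]

viable = [(m["id"], h["id"]) for m, h in product(motors, housings) if compatible(m, h)]
print(f"{len(viable)} viable motor/housing pairings out of {len(motors) * len(housings)}")
```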
I have seen human teams at Intel do this, but they would have a list length of 2: “If this doesn’t work, here’s the backup.”
Right, by 3D printing I mean the individual steps of adding atoms at precise locations.
Like in the video you linked elsewhere—acetylene is going to leak through the seal, or it’s going to dissociate from where it’s supposed to sit, and then it’s going to at best get adsorbed onto your machinery before getting very slowly pumped out. But even adsorbed gas changes the local electron density, which changes how atoms bond.
The machinery may sense when it’s totally gummed up, but it can’t sense if unluckily adsorbed gas has shifted the carbon atoms it’s holding by 10 pm, introducing a small but unacceptable probability of failing to bond, or of bonding to the wrong site. And downstream, atoms in the wrong place mean a higher chance of the machinery bonding to the product, then ripping atoms off of both when the machinery keeps moving.
Adding atoms in individual steps is what you do in organic synthesis with catalysts all the time. This is just trying to make side reactions very, very rare, and one step toward that is to not use solvents, because they are chaotic; use bigger, enclosing catalysts instead.
Countless rare-probability events will cause failures. The machinery has to do sufficient work during its lifetime to contribute enough new parts to compensate for those failures. It does not need to be error-free, just to have a low enough error rate to be usable.
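A back-of-envelope version of that threshold, with purely illustrative numbers:

```python
# A population of assemblers persists if each machine produces, on average,
# more than one working replacement before it wears out or breaks.
steps_per_copy = 1_000_000    # assembly operations needed for a full self-copy (assumed)
p_ruin_per_step = 1e-7        # chance one operation ruins the copy in progress (assumed)
copies_attempted = 3          # copies one machine can attempt in its working life (assumed)

p_copy_ok = (1 - p_ruin_per_step) ** steps_per_copy        # ~exp(-0.1) ~ 0.90
expected_good_copies = copies_attempted * p_copy_ok
print(f"P(one copy succeeds) ~ {p_copy_ok:.2f}; "
      f"expected working copies per machine ~ {expected_good_copies:.2f}")
print("population grows" if expected_good_copies > 1 else "population dies out")
```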
The current hypothesis for the origin of life is that very poor-quality replicators (basically naked RNA) evolved in a suitable host environment and were able to do exactly this, copying themselves slightly faster than they degraded. This is laboratory-verified.
So far I haven’t really heard any objections other than that we are really far from the infrastructure needed to build something like this. Tentatively, I assume the order of dependent technology nodes is:
Human level narrow AI → general purpose robotics → general purpose robotic assembly at macroscale → self replicating macroscale robotics → narrow AI research systems → very large scale research complexes operated by narrow AI.
The fundamental research algorithm is this:
The AI needs a simulation to determine whether a candidate design is likely to work or not. So the pipeline is:
(sim frame) → engine stage 1 → neural network engine → predicted frames, uncertainty
This is recursive, of course: you predict n frames in advance by using the prior predicted frames.
The way an AI can do science is the following:
(1) identify simulation-environment frames, relevant to the task of its end goal, that have high uncertainty
(2) propose experiments to reduce that uncertainty
(3) sort the experiments by a heuristic of cost and information gain
(4) perform the top 1000 or so experiments in parallel, update the model, and go back to the beginning.
All experiments are obviously robotic, ideally with heterogeneous equipment (different brands of robots, different apparatus, different facilities, different funding sources, different software stacks).
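In code, the loop might look something like this minimal sketch; every interface here (the model methods, the experiment objects, the robotic-batch runner) is an assumption for illustration, not a real system’s API:

```python
import heapq

def research_loop(model, propose_experiments, run_robotic_batch, n_parallel=1000):
    while not model.goal_reached():
        # (1) simulation states relevant to the end goal where the model is most uncertain
        targets = model.most_uncertain_states()
        # (2) candidate experiments that would reduce that uncertainty
        candidates = propose_experiments(targets)
        # (3) rank by a cost / information-gain heuristic and take the top batch
        batch = heapq.nlargest(n_parallel, candidates,
                               key=lambda exp: exp.expected_info_gain / exp.cost)
        # run in parallel on heterogeneous robotic equipment, then update the model
        results = run_robotic_batch(batch)
        model.update(results)
```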
Anyway, that’s how you unlock nanoforges: build thousands or millions of STMs and investigate this in parallel. It’s likely not achievable without the dependent tech nodes above.
The current model is that individual research groups have, what, 1-10 STMs? A small team of a few grad students? And they encrypt their results in a “research paper” deliberately designed to be difficult for humans to read even if they are well educated? So even if there were a million labs investigating nanotechnology, nearly all the papers they write are read by only a few of the others. Negative results and raw data are seldom published, so each lab repeats the same mistakes others have already made thousands of times.
This doesn’t work. It only worked for lower-hanging fruit. It’s the model you discover radioactivity or the transistor with, not the model you use to build an industrial complex that crams most of the complexity of Earth’s industrial chain into a small box.