Did Drexler have a mechanism for his 30 year projection?
Let me give an example of the mechanism. AGI is likely available within 10 years, where AGI is defined as a “machine that can perform, to objective standards, at least as well as the average human on a set of tasks large enough to be about the size of the task-space the average human can perform”.
So the mechanism is: (1) large institutions create a large benchmark set (see BIG-bench) of automatically gradable tasks that are about as difficult as the tasks humans can perform; (2) large institutions test AGI candidate architectures, ideally designed by prior AGI candidates, on this benchmark; (3) if the score on the benchmark beats the average human, you have AGI.
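A minimal sketch of that loop, with entirely hypothetical names (`Task`, `evaluate`, `is_agi`) and a toy grader standing in for a real benchmark harness:

```python
# Sketch of the mechanism: grade candidate architectures on a large set
# of automatically gradable tasks, and declare AGI when the mean score
# beats the human baseline. Names and structure are illustrative only.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    name: str
    grade: Callable[[str], float]  # automatic grader: output -> score in [0, 1]
    prompt: str

def evaluate(candidate: Callable[[str], str], tasks: List[Task]) -> float:
    """Mean automatic-grader score of a candidate over the benchmark."""
    return sum(t.grade(candidate(t.prompt)) for t in tasks) / len(tasks)

def is_agi(candidate: Callable[[str], str], tasks: List[Task],
           human_baseline: float) -> bool:
    # Step (3): score on the benchmark > average human? You have AGI.
    return evaluate(candidate, tasks) > human_baseline
```

The point of the sketch is that every step is mechanical: build the task list, run candidates, compare a number. Nothing in the loop requires an insight nobody has had yet.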
This is clearly doable and just a matter of scale.
Contrast this to nanotechnology.
See here:
A nanoassembler is a vast parallel set of assembly lines run by nanomachinery robotics. So you have to develop, at a minimum, gears, dies, sensors, motors, logic, conveyor lines, bearings, and so on.
Note that reliability needs to be very high: a side reaction that causes an unwanted bond will in many cases “kill” that assembly line. Large scale “practical” nano-assemblers will need redundancy.
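A back-of-envelope illustration of why redundancy is forced: assuming a made-up per-bond failure rate of one in a billion, the fraction of assembly lines that survive a long build drops off exponentially with the number of bond operations:

```python
# If each bond operation has some small chance of a "killing" side
# reaction, the probability an assembly line survives a long build
# collapses fast. The error rate and operation counts below are
# invented round numbers, not measured values.
error_rate = 1e-9          # chance a single bond step kills the line
for n_ops in (1e6, 1e9, 1e10):
    survive = (1 - error_rate) ** n_ops
    print(f"{n_ops:.0e} ops -> {survive:.1%} of lines still alive")
```

At a billion operations only about a third of lines survive, and at ten billion essentially none do, so any practical assembler has to treat individual lines as disposable.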
And at a minimum you need at least one functioning subsystem of the nanomachinery working before you can produce anything and begin to bootstrap. Until then you are simply burning money: the cost of manipulating atoms with conventional methods is very high, using things like scanning tunneling microscopes (STMs) that are each very expensive and can usually move only a single head around.
The problem statement is “build a machinery set of complex parts able to produce every part used in itself”.
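That closure requirement can be stated as a toy check: a parts set is self-reproducing only if every part can be manufactured using tools built entirely from parts already in the set. The parts and their dependencies below are invented purely for illustration:

```python
# Toy formalization of "produce every part used in itself":
# requires[p] lists the parts of the machinery needed to manufacture p.
# The set closes on itself only if no requirement points outside it.
def is_self_reproducing(requires: dict) -> bool:
    parts = set(requires)
    return all(set(tools) <= parts for tools in requires.values())

toolchain = {
    "gear":     {"die", "motor"},
    "die":      {"gear", "motor", "sensor"},
    "motor":    {"gear", "bearing"},
    "sensor":   {"die", "logic"},
    "logic":    {"die", "sensor"},
    "bearing":  {"die"},
    "conveyor": {"gear", "motor", "bearing"},
}
# This invented set closes on itself; remove "die" and the check fails,
# i.e. something outside the machine (a human, today) must gap-fill.
```

The hard part, of course, is not the check but the fact that every entry in the real table is itself an unsolved engineering project.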
So it’s an enormous upfront investment: it looks nothing like anything people have made before, we can’t even “see” what we are doing, and the machinery needed is very, very complicated and must be made using methods no one has ever used before.
Note that we don’t have macroscale assemblers: 3D printers can’t copy themselves; nothing can. We fill the gaps in our industrial manufacturing lines with humans who do the steps robots aren’t doing. Humans can’t be used to gap-fill at the nanoscale.
I don’t see a basis for the “30 year” estimate. The Manhattan Project was “let’s purify an isotope we know exists so it will chain-react, and let’s make another fissionable material by exposing uranium to a neutron flux from the first reactor”. There were a lot of difficulties, but it was a straightforward, reasonably simple thing to do: “purify this isotope”, “let nature make this other element (plutonium)”, “purify that”.
There were several methods tried for each step and as it so happened almost everything eventually worked.
Arguably, if you think seriously about what a nanoassembler will require, the answer’s obvious. You need superintelligence: trees of narrow AIs that can learn from a million experiments in parallel, pursue subgoals to find routes to a nanoassembler, run millions of robotic STM stations in parallel, and systematically find ways around the problems.
The sheer amount of data and decisions that would have to be made to produce a working nanoassembler is likely simply beyond unaided human intelligence, and always was. You might need millions or billions of people, or more, to do it with meatware.
The “mechanism” you describe for AGI does not at all sound like something that will produce results within any predictable time.
? Did you not read https://www.deepmind.com/publications/a-generalist-agent or https://github.com/google/BIG-bench or https://cloud.google.com/automl or any of the others?
The “mechanism” as I describe it is, succinctly, “what Google is already doing, but 1-3 orders of magnitude bigger”. Gato solves ~200 tasks at human level. How many tasks does the average human learn to do competently in a lifetime? 2,000? 20,000? 200,000?
It simply doesn’t matter which it is; all are within the space of “could plausibly be solved within 10 years”.
Whatever it is, it’s bounded, and likely the same architecture can be extended to handle all the tasks. I mention bootstrapping (because ‘writing software to solve a prompt’ is a task, and ‘designing an AI model to do well on an AGI task’ is a task) because it’s the obvious way to get a huge boost in performance to solve this problem quickly.
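As a back-of-envelope check on the “1-3 orders of magnitude” framing: assuming, purely for illustration, that the task count an architecture can handle doubles yearly (an invented rate, not a measured trend), even the largest guess is within ten doublings of Gato’s ~200:

```python
# How long to scale from ~200 tasks to each guess at the human task
# count, assuming capability doubles once per year? The doubling time
# is an assumption for illustration only.
import math

GATO_TASKS = 200
for human_tasks in (2_000, 20_000, 200_000):
    gap = human_tasks / GATO_TASKS   # 10x, 100x, or 1000x
    years = math.log2(gap)           # doublings needed at one per year
    print(f"{human_tasks:>7} tasks: {gap:>6.0f}x gap, ~{years:.1f} doublings")
```

Even the 200,000-task guess needs fewer than ten doublings, which is why the exact number doesn’t change the 10-year conclusion under this assumption.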