I just found Eric Drexler’s “Paretotopia” idea/talk. It seems great to me; it seems like it should be one of the pillars of AI governance strategy. It also seems highly relevant to technical AI safety (though that takes much more work to explain).
Why isn’t this being discussed more? What are the arguments against it?
Without having watched the video: prior knowledge of the nanomachinery proposals shows that a simple safety mechanism is feasible.
No nanoscale robotic system should be permitted to store more than a small fraction of the digital file containing the instructions to replicate itself, nor should it have sufficient general-purpose memory to be capable of storing one.
This simple rule makes nanotechnology safe from grey goo: a runaway replication cascade becomes nearly impossible, because any system that gets out of control will have a large, macroscale component you can turn off. It’s also testable: you can look at the design of a system and determine whether it meets the rule or not.
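To make the “you can look at the design” claim concrete, here is a minimal sketch of what such an audit check might look like, assuming the rule is expressed as a maximum fraction of the replication file that may be stored on the device. The field names and the 1% threshold are illustrative assumptions, not part of any actual proposal.

```python
# Minimal sketch of the design rule above as an auditable check.
# The 1% threshold and field names are assumptions for illustration only.

from dataclasses import dataclass

MAX_FRACTION = 0.01  # assumed: no on-device store may exceed 1% of the replication file


@dataclass
class NanosystemDesign:
    replication_file_bytes: int        # size of the full self-replication instructions
    onboard_instruction_bytes: int     # instruction storage built into the device
    general_purpose_memory_bytes: int  # reprogrammable memory on the device


def complies(design: NanosystemDesign) -> bool:
    """A design passes only if no on-device store could hold the replication file."""
    limit = MAX_FRACTION * design.replication_file_bytes
    return (design.onboard_instruction_bytes < limit
            and design.general_purpose_memory_bytes < limit)
```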
AI alignment is fuzzier, and I haven’t heard of a comparably simple, testable rule. Then again, if such a rule existed, MIRI would have an incentive not to discuss it.
At least for near-term agents we can talk about such rules. They have to do with domain bounding. For example, the “success” heuristic for a paperclip-manufacturing subsystem must include terms that limit the size of the paperclip-manufacturing machinery, and these terms should be redundant, so that more than a single check applies. So, for example, the agent might:
Seek maximum paperclips produced, with a large penalty for: (greater than A volume of machinery, greater than B tonnage of machinery, machinery outside of markers C, greater than D probability of a human killed, greater than E probability of an animal harmed, greater than F total network devices, greater than G ..)
Essentially, each of these redundant terms is a “circuit breaker”: if any one trips, the agent will not consider the action further.
“Does the agent have scope-limiting redundant circuit breakers?” is a testable design constraint, while “is it going to be friendly to humans?” is rather more difficult.
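As a rough illustration (not anything proposed in this thread), the circuit-breaker idea could be wired around a task objective roughly like this, with every field name and threshold below being a hypothetical placeholder:

```python
# Sketch of "scope-limiting redundant circuit breakers" as hard vetoes around a
# task objective. All fields and limits are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class PlanEstimate:
    paperclips: float            # expected paperclips produced
    machinery_volume_m3: float   # A: total volume of machinery
    machinery_mass_t: float      # B: total tonnage of machinery
    outside_markers: bool        # C: any machinery outside the marked zone
    p_human_killed: float        # D: estimated probability a human is killed
    p_animal_harmed: float       # E: estimated probability an animal is harmed
    network_devices: int         # F: total network devices used


LIMITS = dict(volume_m3=1000.0, mass_t=500.0, p_human=1e-9, p_animal=1e-6, devices=100)


def breaker_tripped(plan: PlanEstimate) -> bool:
    """True if any one of the redundant scope checks trips."""
    return (plan.machinery_volume_m3 > LIMITS["volume_m3"]
            or plan.machinery_mass_t > LIMITS["mass_t"]
            or plan.outside_markers
            or plan.p_human_killed > LIMITS["p_human"]
            or plan.p_animal_harmed > LIMITS["p_animal"]
            or plan.network_devices > LIMITS["devices"])


def score(plan: PlanEstimate) -> float:
    """Objective the agent maximizes: paperclips, unless any breaker trips."""
    if breaker_tripped(plan):
        return float("-inf")  # the plan is rejected outright, per the comment above
    return plan.paperclips
```

Whether each breaker is a hard veto (as here) or merely a very large penalty is a design choice; the point is that the presence of such redundant checks is something an auditor can verify from the design.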
No nanoscale robotic system … should be permitted to store more than a small fraction of the digital file containing the instructions to replicate itself.
Will you outlaw bacteria?
The point was to outlaw artificial molecular assemblers like Drexler described in Engines of Creation. Think of something like bacteria, but with cell walls made of diamond. They might be hard to deal with once released into the wild: diamond is just carbon, so they could potentially consume carbon-based life, but no natural organism could eat them. This is the “ecophagy” scenario.
But I still think this is a fair objection. Some paths to molecular nanotechnology might go through bio-engineering, the so-called “wet nanotechnology” approach. We’d start with something like a natural bacterium and then gradually replace components of the cell with synthetic chemicals, such as amino acid analogues or extra base pairs and codons, which lets us work in an expanded universe of “proteins” that might be easier to engineer and that could have capabilities natural biology can’t match. This kind of thing is already starting to happen. At what point does the law against self-replication kick in? The wet path is infeasible without self-replication, at least early on.
The point was to outlaw artificial molecular assemblers like Drexler described in Engines of Creation.
Not outlaw. Prohibit “free-floating” ones that can work without any further input (besides raw materials). Allowed assemblers would be connected via network ports to a host computer system that holds the needed digital files, housed in something large enough for humans to see it and, if necessary, break it with a fire axe or shotgun.
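A schematic sketch of that tethered arrangement, purely to illustrate the shape of it (the class names, chunk size, and protocol are invented, not from any real design):

```python
# Sketch of a "tethered assembler": the macroscale host keeps the instruction
# file and streams it in chunks too small for the device to ever hold a
# meaningful fraction of it. All names and sizes are invented for illustration.

CHUNK_BYTES = 256  # assumed tiny working buffer on the assembler


class HostController:
    """Macroscale host: the part you can unplug or take a fire axe to."""

    def __init__(self, instruction_file: bytes):
        self.instruction_file = instruction_file

    def chunk(self, index: int) -> bytes:
        start = index * CHUNK_BYTES
        return self.instruction_file[start:start + CHUNK_BYTES]


class TetheredAssembler:
    """Nanoscale device: executes one streamed chunk at a time and stores none."""

    def __init__(self, host: HostController):
        self.host = host

    def run(self) -> None:
        index = 0
        while True:
            chunk = self.host.chunk(index)  # nothing arrives if the host is shut off
            if not chunk:
                break
            self.execute(chunk)             # the chunk is discarded after execution
            index += 1

    def execute(self, chunk: bytes) -> None:
        pass  # placeholder for the actual assembly step
```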
Note that making bacteria with gene knockouts so they can’t replicate solely on their own, but have to be given specific amino acids in a nutrient broth, would be a way to retain control if you needed to do it the ‘wet’ way.
The law against self-replication is the same testable principle, actually: restoring the knocked-out genes would be breaking the law, because each modified ‘wet’ bacterium would then contain everything it needs to replicate itself again.
I didn’t create this rule. But succinctly:
Life on earth is more than likely stuck at a local maximum within the set of all possible self-replicating nanorobotic systems.
The grey goo scenario posits that you could build tiny, fully artificial nanotechnological ‘cells’, made of more durable and reliable parts, that would be closer to the global maximum for self-replicating nanorobotic systems.
These would then outcompete all life, bacteria included, and convert the biosphere into an ocean of copies of this single system. People imagine each cellular unit might be made of metal, so it would look grey to the naked eye, hence ‘grey goo’. (I won’t speculate about how they might be constructed, except to note that you would use AI agents to find designs for these machines. The AI agents would do most of their exploring in simulation, and some using a vast array of prototype ‘nanoforges’ capable of assembling test components and full designs. The agents could consider any known element and any design pattern known at the time or discovered in the process, and combine these ideas into possible ‘global maximum’ designs. This sharing of information, where any piece from any prototype can be adapted and rescaled for use in a different new prototype, is something nature can’t do with conventional evolution, so the search could be many times faster.)
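For what it’s worth, the search loop described in that parenthetical could be sketched roughly as follows; the functions, scoring, and data structures are all invented placeholders standing in for much more complicated machinery:

```python
# Schematic sketch of the design-search loop described above: explore mostly in
# simulation, prototype only promising candidates, and let every component from
# every prior design be reusable anywhere. Everything here is a placeholder.

import random


def simulate(design: list) -> float:
    """Cheap, approximate fitness estimate from simulation (placeholder)."""
    return random.random()


def prototype_and_test(design: list) -> float:
    """Expensive fitness measurement on a physical prototype (placeholder)."""
    return random.random()


def recombine(parent: list, library: list) -> list:
    """Swap in a component drawn from the shared library of all prior designs."""
    child = list(parent)
    child[random.randrange(len(child))] = random.choice(library)
    return child


def search(seed: list, iterations: int = 1000) -> list:
    library = list(seed)              # every part ever tried is reusable elsewhere
    best, best_score = seed, 0.0
    for _ in range(iterations):
        candidate = recombine(best, library)
        score = simulate(candidate)
        if score > 0.9:               # only promising designs get a physical build
            score = prototype_and_test(candidate)
        library.extend(candidate)     # its parts join the shared library
        if score > best_score:
            best, best_score = candidate, score
    return best
```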