The latter. I don’t see any reason why a superintelligent entity would not be able to take over the world or destroy it or dismantle it into a Dyson swarm. The point I am trying to make is that the tooling and structures that a superintelligent AGI would need to act autonomously in that way do not actually exist in our current world, so before we can be made into paperclips, there is a necessary period of bootstrapping where the superintelligent AGI designs and manufactures new machinery using our current machinery. Whether it’s an unsafe AGI that is trying to go rogue, or an aligned AGI that is trying to execute a “pivotal act”, the same bootstrapping must occur first.
Case study: a common idea I’ve seen while lurking on LessWrong and SSC/ACT for the past N years is that an AGI will “just” hack a factory and get it to produce whatever designs it wants. This is not how factories work. There is no 100% autonomous factory on Earth that an AGI could just take over to make some other widget instead. Even highly automated factories 1.) are automated to produce a specific set of widgets, 2.) require physical adjustments to make different widgets, and 3.) rely on humans for things like input of raw materials, transferring in-work products between automated lines, and the testing or final assembly of completed products. 3D printers are one of the worst offenders in this regard. The public perception is that a 3D printer can produce anything and everything, but they actually have pretty strong constraints on what types of shapes they can make and what materials they can use, and they usually require multi-step processes to work around those constraints, or post-processing to clean up residual pieces that aren’t intended to be part of the final design, and almost always a 3D printer is producing sub-parts of a larger design that must still be assembled together with bolts, screws, welds, or some other fasteners.
So if an AGI wants unilateral control where it can do whatever it wants, the very first prerequisite is that it needs to make a futuristic, fully automated, fully configurable, network-controlled factory exist—which then needs to be built with what we have now, and that’s where you’ll hit the supply constraints I’m describing above for things like lead times on part acquisition. The only way to reduce this bootstrapping time is to have this stuff designed in advance of the AGI, but that’s backwards from how modern product development actually works. We design products, and then we design the automated tooling to build those products. If you asked me to design a factory that would be immediately usable by a future AGI, I wouldn’t know where to even start with that request. I need the AGI to tell me what it wants; then I can build that, and then the AGI can take over and do its own thing.
A related point that I think gets missed is that our automated factories aren’t necessarily “fast” in the way you’d expect. There are long lead times for complex products. Even if you have the specialized machinery for creating new chips, you’re still looking at ~14-24 weeks from when raw materials are introduced to when the final products roll off the line. We hide that delay by constantly building the same things, but it becomes very visible when there’s a sudden demand spike—that’s why it takes so long for supply to catch up with demand for products like processors or GPUs. I have no trouble imagining a superintelligent entity that could optimize this and knock down the cycle time, but there are going to be physical limits to these processes, and the question is whether it can knock it down to 10 weeks or to 1 week. And when I’m talking about optimization, this isn’t just uploading new software, because that isn’t how these machines work. It’s designing new, faster machines, or redesigning the assembly line and replacing the existing machines, so there’s a minimum time required for that too before you can benefit from the faster cycle time on actually making things. Once you hit practical limits on cycle time, the only way to get more stuff faster is to scale wide by building more factories or making your current factories even larger.
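The lag between starting more production and seeing more output can be sketched with a toy pipeline model. All of the numbers here are illustrative assumptions, not real fab data:

```python
# Toy pipeline model of a chip fab: finished output lags input by the
# cycle time, so a capacity increase today yields nothing for months.
# CYCLE_TIME_WEEKS and the start rates are hypothetical round numbers.

CYCLE_TIME_WEEKS = 20          # raw materials in -> finished chips out
base_starts = 10_000           # wafer starts per week before the demand spike
boosted_starts = 15_000        # starts per week after capacity is added

def weekly_output(week, boost_week):
    """Chips finished in `week` reflect what was started CYCLE_TIME_WEEKS ago."""
    started = week - CYCLE_TIME_WEEKS
    return boosted_starts if started >= boost_week else base_starts

# Capacity is boosted at week 0, but output only rises at week 20:
outputs = [weekly_output(w, boost_week=0) for w in range(25)]
```

The point of the sketch is that for the entire cycle time after the boost, the factory ships exactly what it would have shipped anyway; the demand spike is invisible in the output for ~5 months.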
If we try to avoid the above problems by suggesting that the AGI doesn’t actually hack existing factories, but instead convinces the factory owners to build the things it wants, there’s not a huge difference—instead of the prerequisite being “build your own factory”, it’s “hostile takeover of an existing factory”, where that takeover is accomplished by manipulation, by acquisition (on the public market or as a private sale), by outbidding existing customers (e.g. having enough money to convince TSMC to make your stuff instead of Apple’s), or with actual arms and violence. And there are still the other lead times I’ve mentioned for retooling assembly lines and actually building a complete, physical system from one or more automated lines.
You should stop thinking about AI designed nanotechnology like human technology and start thinking about it like actual nanotechnology, i.e. life. There is no reason to believe you can’t come up with a design for self-replicating nanorobots that can also self-assemble into larger useful machines, all from very simple and abundant ingredients—life does exactly that.
Tangent: I don’t think I understand the distinction you’ve made between “AI designed nanotechnology” and “human technology”. Human technology already includes “actual nanotechnology”, e.g. nanolithography in semiconductor production.
I agree that if the AGI gives us a blueprint for the smallest self-replicating nanobot that we’ll need to bootstrap the rest of the nanobot swarm, all we have to do is assemble that first nanobot, and the rest follows. It’s very elegant.
We still need to build that very first self-replicating nanobot though.
We can either do so atom-by-atom with some type of molecular assembler like the ones discussed in Nanosystems, or we can synthesize DNA and use clever tricks to get some existing biology to build things we want for us, or maybe we can build it from a process that the AGI gives us that only uses chemical reactions or lab/industrial production techniques.
If we go with the molecular assembler approach, we need to build one of those first, so that we can build the first self-replicating nanobot. This is effectively the same argument I made above, so I’m going to skip it.
If we go with the DNA approach, then the AGI needs to give us that DNA sequence, and we have to hope that we can create it in a reasonable time despite our poor yield rates and long turnaround times for DNA synthesis of longer sequences. If the sequence is too long, we might be in a place where we first need to ask the AGI to design new DNA synthesis machines, otherwise we’ll be stuck. In that world, we return to my arguments above. In the world where the AGI gives us a reasonable-length DNA sequence, say the size of a very small cell’s genome or something, we can continue. The COVID-19 vaccine provides an example of how this goes. We have an intelligent entity (humans) writing code in DNA, synthesizing that DNA, converting it to mRNA, and getting a biological system (human cells) to read that code and produce proteins. Humanity has these tools. I am not sure why we would assume that the company that develops AGI has them. At multiple steps in the chain of what Pfizer and Moderna did to bring mRNA vaccines to market, there are single-vendor gatekeepers who hold the only tooling or processes for industrial production. Even if we assume that you have all of the tooling and processes, we still need to talk about cycle times. I believe Pfizer aimed to get the cycle time (raw materials → synthesized vaccines) for a batch of vaccine down from 15 weeks to 8 weeks. This is an incredibly complex, amazing achievement—we literally wrote a program in DNA, created a way to deliver it to the human body, and it executed successfully in that environment. However, it’s also an example of the current limitations we have. Synthesizing from scratch the mRNA needed to generate a single protein takes >8 weeks, even if you have the full assembly line figured out. This will get faster in time, and we’ll get better at doing it, but I don’t see any reason to think that we’ll have some type of universal / programmable assembly line for an AGI to use anytime soon.
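The yield problem with long sequences falls out of simple arithmetic: chemical DNA synthesis adds one base at a time, and each addition succeeds with probability roughly equal to the coupling efficiency, so the fraction of correct full-length product decays exponentially with length. A sketch, using a typical published coupling-efficiency figure for phosphoramidite synthesis as the assumption:

```python
# Full-length yield of chemically synthesized DNA decays exponentially
# with sequence length. 0.99 is a typical coupling efficiency for
# phosphoramidite synthesis, used here illustratively.

def full_length_yield(n_bases, coupling_efficiency=0.99):
    """Fraction of strands that come out as correct, full-length product."""
    return coupling_efficiency ** (n_bases - 1)

short_oligo = full_length_yield(60)      # ~0.55: workable
long_oligo = full_length_yield(300)      # ~0.05: mostly truncated junk
gene_sized = full_length_yield(10_000)   # effectively zero
```

This is why long constructs are assembled from many short oligos rather than synthesized in one pass, and why each extra assembly round adds time and another chance of failure.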
If we go with a series of chemical reactions/lab/industrial production techniques, we need to build clean rooms and labs and vacuum chambers and whatever else is used to implement whatever process the AGI gives us for synthesizing the nanobots. Conceptually this is the simplest idea for how you could get something to work quickly. If the AGI gave you a list of chemicals, metals, and biological samples, and a step-by-step process of how to mix, drain, heat, sift, repeat, and at the end of this process you had self-replicating nanobots, that would be pretty cool. This is basically taking evolution’s random walk from a planetary petri dish to the life we see today and asking, “could an AGI shorten the duration from a billion years of random iterative development into mere weeks of some predetermined process to get the first self-replicating nanobots?” The problem with programming is that interpreting code is hard. Anything that can interpret the nanobot equivalent of machine code, like instructions for how and where to melt GPU factories, is going to be vastly more complex than the current state-of-the-art R&D being done by any human lab today. I don’t see a way where this doesn’t reduce to the same Factorio problem I’ve been describing. We’ll first need to synthesize A, so that we can synthesize B, so that we can synthesize C, so that we can synthesize D, and each step will require novel setups and production lines and time, and at the end of it we’ll have a sequence of steps that looks an awful lot like a molecular assembly line for the creation of the very first self-replicating nanobots.
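The A → B → C → D chain is strictly serial (each step’s setup can only be built once the previous step’s output exists), so the lead times add rather than overlap. A toy illustration, with hypothetical stage names and durations:

```python
# Toy model of the bootstrapping chain: each stage depends on the output
# of the previous one, so total time is the SUM of the stage durations,
# not the max. Stage names and week counts are hypothetical.

stages = {
    "A: precursor synthesis":     12,  # weeks to design, build, and run
    "B: intermediate assembly":   10,
    "C: assembler components":     8,
    "D: first self-replicator":    6,
}

serial_weeks = sum(stages.values())    # stages must run in order
parallel_weeks = max(stages.values())  # only possible if stages were independent
```

Even with generous assumptions, serializing four stages turns a one-quarter project into a most-of-a-year project, and any stage that fails sends you back down the chain.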
The hypothetical world(s) where these types of constraints aren’t problems for a “pivotal act” are world(s) where the AGI can give us a recipe for the self-replicating nanobots that we can build in our living room at home with a pair of tweezers and materials from Amazon. The progression of human technology over the past ~60 years in the fields of nano-scale engineering and synthetic biology has been toward increasingly elaborate, complex, time-consuming, and low-yield processes and lab equipment just to replicate the simplest structures that life produces ad hoc. I am certain this limitation will be conquered, and I’m equally certain that AI/ML systems will be instrumental in doing so, but I have no evidence to rationally conclude that there isn’t a mountain of prerequisite tools still remaining for humanity to build before something like “design anything at any scale” capabilities are generally available in a way that an AGI could make use of them.
Tangent: If we’re concerned about destroying the world, deliberately building self-replicating nanobots that start simple but rapidly assemble into something arbitrarily complex from the whims of an AGI seems like a bad idea, which is why my original post was focused on a top-down hardware/software systems engineering process where the humans involved could presumably understand the plans, schematics, and programming that the AGI handed to them prior to the construction and deployment of those nanobots.