I’m not sure that’s correct. Modern supply chains are incredibly complex, and manufacturing techniques for advanced technology are incredibly sophisticated and low-tolerance. My guess is that your one robot and your solar panels will wear down long before you reach the stage where you can manufacture new chips/photovoltaic panels.
I think the main risk here is that it can keep scavenging existing tech long enough to keep growing until it can bootstrap tech of its own. Still, that seems relatively risky for the AI compared to waiting until it’s self-sufficient before destroying humanity.
Yeah, basically agree. When you have only a few ways of interacting with the world, you’re at the mercy of accidents until you can get redundancy—i.e. use a robot to build another robot out of scavenged parts. But ofc waiting longer until you’re self-sufficient also carries risks.
I think the idea is that there’s an AGI server, a solar cell, and one teleoperated robot body in an otherwise-empty post-apocalyptic Earth, and then that one teleoperated robot body could build a janky second teleoperated robot body from salvaged car parts or whatever, and then the two of them could find more car parts to build a third and fourth, and those four could build up to eight, etc.
I agree that literally one robot wouldn’t get much done.
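The doubling story above can be sketched as a toy model (my own construction, not from the thread; the function name and every number are illustrative assumptions): each robot builds one copy per cycle, limited only by a finite stock of salvageable parts.

```python
# Toy model of exponential self-replication from salvage.
# All numbers are made-up assumptions for illustration.

def replication_cycles(start_robots: int, parts_stock: int, parts_per_robot: int) -> list[int]:
    """Return the robot population after each build cycle until parts run out."""
    robots = start_robots
    history = [robots]
    while parts_stock >= parts_per_robot:
        # Each robot builds at most one copy per cycle, capped by remaining parts.
        buildable = min(robots, parts_stock // parts_per_robot)
        if buildable == 0:
            break
        robots += buildable
        parts_stock -= buildable * parts_per_robot
        history.append(robots)
    return history

# One robot, salvage for ~1000 more, one "car's worth" of parts per robot:
print(replication_cycles(1, 1000, 1))
# population doubles each cycle (1, 2, 4, 8, ...) until the salvage runs out
```

On these assumptions the binding constraint is the salvage stock, not time: the population doubles every cycle, so a thousand car-part robots are only about ten cycles away from one.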
manufacture new chips/photovoltaic panels
I think chips would be much much more likely to be a limiter than solar panels. Existing rooftop solar panels are I think designed to last 25-30 years, and would probably still work OK long after that. There are lots of solar cell types (not just silicon, but also dye-sensitized, polymer, amorphous silicon, perovskite, various other types of thin-film, etc.). I don’t know the whole supply chain for any of those but I strongly suspect that at least some approach is straightforward compared to chips.
Modern supply chains are incredibly complex, and manufacturing techniques for advanced technology is incredibly sophisticated and low tolerance.
I don’t think we can infer much from that. Humans are not optimizing for simple supply chains, they’re optimizing for cost at scale. For example, the robots could build a bunch of e-beam lithography machines instead of EUV photolithography; it would be WAY slower and more capital-intensive, so humans would never do that, but maybe an AI would, because the underlying tech is much simpler (I think).
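To make that throughput gap concrete, here is a back-of-envelope calculation. Both numbers are rough illustrative assumptions on my part (e-beam direct-write is known to be orders of magnitude slower than an EUV scanner, but these particular figures are not sourced):

```python
# Back-of-envelope only; both throughput figures are rough assumptions,
# not sourced numbers.
euv_wafers_per_hour = 150      # assumed throughput of one EUV scanner
ebeam_hours_per_wafer = 10     # assumed time for one e-beam direct-write tool

# How many e-beam tools match one scanner's throughput?
ebeam_tools_needed = euv_wafers_per_hour * ebeam_hours_per_wafer
print(ebeam_tools_needed)  # on these assumptions, ~1500 tools per scanner
```

On these made-up numbers you’d need over a thousand e-beam tools to replace a single scanner, which is exactly the “WAY slower and more capital-intensive, but technologically simpler” tradeoff being described.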
Mostly agree. Just one point:

and then that one teleoperated robot body could build a janky second teleoperated robot body from salvaged car parts or whatever

My suspicion is that you lose reliability and finesse each time you do this, and cause some wear and tear on the original robot, such that this approach doesn’t bootstrap.
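This worry can also be made concrete with a toy model (again my own construction; `copy_fidelity` and the threshold are illustrative assumptions): if each copy is only some fixed fraction as capable as its builder, copy-of-a-copy chains stall after a number of generations that is very sensitive to that fraction.

```python
# Toy model of the "you lose finesse with each copy" worry.
# copy_fidelity and min_build_quality are illustrative assumptions.

def generations_until_stall(copy_fidelity: float, min_build_quality: float) -> int:
    """How many copy-of-a-copy generations stay above the build threshold."""
    quality = 1.0  # the original robot
    generations = 0
    while quality * copy_fidelity >= min_build_quality:
        quality *= copy_fidelity   # each copy is a bit worse than its builder
        generations += 1
    return generations

print(generations_until_stall(0.9, 0.5))   # → 6
print(generations_until_stall(0.99, 0.5))  # → 68
```

At 90% fidelity the chain stalls after six generations; at 99% it runs an order of magnitude longer. So on this model, whether the approach bootstraps hinges almost entirely on how close to perfect each copy is.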
Yes, it is unlikely to succeed, but what is the minimum number of robots for success? In the movie Terminator 3, Skynet has access to only a few drones and, according to apocrypha, had to enslave humans to build the first mass-produced robots. This seems inconsistent with starting a nuclear war, as most chip production would be destroyed in such a war.
In other words, the main problem with the Terminator scenario is not that Skynet used humanoid robots to exterminate humans, but that Skynet damaged its own robot-building ability by prematurely starting a nuclear war.
The chances of this scenario grow over time, as we will soon have more autonomous robots. The chances are also higher if there are 10 or 1,000 preserved robots; 1,000 robots could be found in a single factory.