One thing in the posts I found surprising was Eliezer’s assertion that you need a dangerous superintelligence to get nanotech. If the AI is expected to do everything itself, including inventing the concept of nanotech, I agree that it would be dangerously superintelligent.
However, suppose Alpha Quantum can reliably approximate the behaviour of almost any particle configuration. Not literally any: it can’t simulate a quantum computer factorizing large numbers any better than ordinary factoring algorithms can, but it can handle enough to design a nanomachine. (It has been trained to approximate the ground truth of quantum mechanics equations, and it does this very well.)
For example, you could use IDA (iterated distillation and amplification): start by training a net to imitate a simulation of a handful of particles, then compose several smaller nets into one large one.
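To make the distillation step concrete, here is a minimal sketch; `simulate_particles` is a hypothetical stand-in for the trusted few-particle solver, and the layer sizes and training loop are illustrative, not a claim about how Alpha Quantum would actually be built.

```python
# Distillation step: train a small net to imitate a trusted few-particle
# simulator, so amplification can later compose copies to cover larger systems.
# `simulate_particles` is a hypothetical placeholder, NOT a real QM solver.
import torch
import torch.nn as nn

N_PARTICLES = 8               # tiny systems only; composition handles scale later
STATE_DIM = N_PARTICLES * 6   # e.g. position + momentum per particle

def simulate_particles(state: torch.Tensor) -> torch.Tensor:
    """Stand-in for the expensive ground-truth simulator (assumption)."""
    return state + 0.01 * torch.randn_like(state)  # placeholder dynamics

surrogate = nn.Sequential(
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, STATE_DIM),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(10_000):
    states = torch.randn(64, STATE_DIM)       # sample small configurations
    with torch.no_grad():
        targets = simulate_particles(states)  # ground-truth next states
    loss = nn.functional.mse_loss(surrogate(states), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```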
Add a nice user interface and we can drag and drop atoms.
You can add optimization: gradient descent trying to maximize the efficiency of a motor, or minimize the size of a logic gate. All of this is optimised to fit a simple equation, so assuming you don’t get smart, general mesa-optimizers forming and deducing how to manipulate humans from very little information about humans, you should be safe. Again, designing a few nanogears by gradient descent and shallow heuristics shouldn’t be hard. You also want to make sure not to design a nanocomputer containing a UFAI, but a computer is fairly large and obvious. (Optimizing for the smallest logic gate won’t produce a UFAI.)
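As a hedged sketch of that optimisation loop (the design encoding and the `motor_efficiency` objective are toy placeholders invented for illustration, not a real motor model):

```python
# Design-by-gradient-descent: treat a nanomotor design as a parameter vector,
# score it with a differentiable objective (in practice, the surrogate
# simulator), and ascend efficiency. The objective fits a simple equation;
# nothing here models or reasons about humans.
import torch

design = torch.randn(128, requires_grad=True)  # encoded atom placements (toy)
opt = torch.optim.SGD([design], lr=1e-2)

def motor_efficiency(d: torch.Tensor) -> torch.Tensor:
    """Hypothetical objective; really: simulate the decoded design and
    measure work out / energy in."""
    return -(d - 1.0).pow(2).mean()            # toy concave stand-in

for step in range(1_000):
    loss = -motor_efficiency(design)           # maximize efficiency
    opt.zero_grad()
    loss.backward()
    opt.step()
```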
If the humans want to make a nanocomputer, they can download an existing chip schematic and scale it down, replacing the logic gates with nanologic.
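Read as a netlist transformation, that step might look something like the sketch below; the `Gate` type and the `NANO_CELLS` library are invented for the example, and a real flow would work on a standard format such as structural Verilog.

```python
# Keep the schematic's connectivity, swap each logic gate for a hypothetical
# nano-mechanical equivalent cell.
from dataclasses import dataclass

@dataclass
class Gate:
    kind: str          # "NAND", "NOR", "INV", ...
    inputs: list[str]  # net names
    output: str

# Hypothetical mapping from standard cells to nanologic cells (assumption).
NANO_CELLS = {"NAND": "nano_nand", "NOR": "nano_nor", "INV": "nano_inv"}

def to_nanologic(netlist: list[Gate]) -> list[Gate]:
    return [Gate(NANO_CELLS[g.kind], g.inputs, g.output) for g in netlist]

# NAND-only half adder as a worked example.
half_adder = [
    Gate("NAND", ["a", "b"], "n1"),
    Gate("NAND", ["a", "n1"], "n2"),
    Gate("NAND", ["b", "n1"], "n3"),
    Gate("NAND", ["n2", "n3"], "sum"),
    Gate("INV", ["n1"], "carry"),   # carry = a AND b = NOT(NAND(a, b))
]
print(to_nanologic(half_adder))
```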
The first physical hardware would be a minimal nanoassembler, with analogue signals going from the macroscopic to the nanoscopic. The nanoassembler is a robotic arm; all the control decisions and the digital-to-analogue conversion are macroscopic. This is, of course, all in lab conditions. Perhaps it is produced with a scanning tunnelling microscope, perhaps with carefully designed proteins.
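A sketch of what that macroscopic control side could look like; every hardware interface here (`TipDAC`, the settle time) is an assumption rather than a real instrument API.

```python
# All decisions are made digitally on the macroscopic side; only analogue
# voltages cross into the nanoscale to steer an STM-style tip.
import time

class TipDAC:
    """Hypothetical digital-to-analogue converter driving the tip actuators."""
    def set_voltage(self, axis: str, volts: float) -> None:
        print(f"DAC {axis} <- {volts:.3f} V")  # placeholder for hardware I/O

def place_atom(dac: TipDAC, x: float, y: float, z_contact: float) -> None:
    dac.set_voltage("x", x)
    dac.set_voltage("y", y)
    dac.set_voltage("z", z_contact)  # lower the tip to deposit an atom
    time.sleep(0.01)                 # settle time (made-up constant)
    dac.set_voltage("z", 0.0)        # retract

plan = [(0.1, 0.2, 1.5), (0.3, 0.2, 1.5)]  # placements from the design stage
dac = TipDAC()
for x, y, z in plan:
    place_atom(dac, x, y, z)
```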
Once you have this, it shouldn’t be too hard to bootstrap to create anything you can design.
Basically, I don’t think it would be too hard for humans to create nanotech with the help of some narrowish, dumb AI. And I am wondering whether this changes the strategic picture at all.
(I’m not sure I understood your description right, but here is my take:)
I think your proposal skips over some crucial steps, which are in fact hard. In particular, I understood it as “you have an AI which can give you blueprints for nano-sized machines”. But we already have some blueprints; that isn’t the issue. How we assemble them is the issue.
I expect you would find more issues like this if you tried writing the plan out in more detail.
However, I share the general sentiment behind your post: I also don’t understand why you can’t get some pivotal act by combining human intelligence with some narrow AI. I expect that Eliezer has tried to come up with such combinations and came away with some general takeaways on why this isn’t realistic. But I haven’t done the exercise myself, so it seems non-obvious to me. Perhaps it would be beneficial if many more people tried the exercise and then communicated their takeaways.
I think it would be!
Uh, how big do you think contemporary chips are?
Like tens of atoms across. So you aren’t scaling down that much. (Most of your performance gains would come from being able to stack your chips, or whatever.)
I got the impression Eliezer’s claiming that a dangerous superintelligence is merely sufficient for nanotech.
How would you save us with nanotech? It had better be good given all the hardware progress you just caused!
No, I’m pretty confident Eliezer thinks AGI is both necessary and sufficient for nanotech. (Realistically/probabilistically speaking, given plausible levels of future investment into each tech. Obviously it’s not logically necessary or sufficient.) Cf. my summary of Nate’s view in Nate’s reply to Joe Carlsmith:

Nate agrees that if there’s a sphexish way to build world-saving nanosystems, then this should immediately be the top priority, and would be the best way to save the world (that’s currently known to us). Nate doesn’t predict that this is feasible, but it is on his list of the least-unlikely ways things could turn out well, out of the paths Nate can currently name in advance. (Most of Nate’s hope for the future comes from some other surprise occurring that he hasn’t already thought of.)
(I read “sphexish” here as a special case of “narrow AI” / “shallow cognition”, doing more things as a matter of pre-programmed reflex rather than as a matter of strategic choice.)