I believe you are correct, however, EY argues that a sufficiently intelligent AGI might be able to hack biology and use ribosomes and protein machinery to skip straight to diamondoid self replicating nanotechnology.
I suspect this is impossible—that is, there may exist a sequence of steps that would work and achieve this, but it does not mean that the information exists within the pool of [all scientific data collected by humans] to calculate what those steps are.
Instead you would have to do this iteratively, similar to how humans would do it. Secure a large number of STM microscopes and vacuum chambers. Build, using electron beam lathes or other methods, small parts to test nanoscale bonding strategies. Test many variants and develop a simulation sufficient to design nanoscale machine parts. Iteratively use the prior data to design and test ever larger and more sophisticated assemblies. Then, once you have a simulated path to success and high enough confidence, bootstrap a nanoforge. (bootstrap means you try to choose a path where prior steps on the path make future steps easier)
An ASI holding everyone hostage isn’t a winning scenario for the ASI. Humans are just going to pull the trigger on their own nuclear guns in such a scenario.
The EY scenarios where the ASI wins generally involve deception. Everything is fine, until everyone dies all at the same time from some type of immediately lethal but delayed bioweapon or nanotechnology based weapon.
Botulism toxin is one way this is theoretically achievable—it takes a very small quantity to kill a human. So a ‘time bomb’ of a capsule of it, injected painlessly into most humans using nanotechnology, or a virus that edits our genome and inserts the botulism gene and some mechanism to prevent expression for a few months, or similar method. For one thing, botulism toxin is a protein and is probably much larger than it needs to be...
If all of EY’s scenarios require deception, then detection of deception from rogue AI systems seems like a great place to focus on. Is there anyone working on that problem?
I believe you are correct, however, EY argues that a sufficiently intelligent AGI might be able to hack biology and use ribosomes and protein machinery to skip straight to diamondoid self replicating nanotechnology.
I suspect this is impossible—that is, there may exist a sequence of steps that would work and achieve this, but it does not mean that the information exists within the pool of [all scientific data collected by humans] to calculate what those steps are.
Instead you would have to do this iteratively, similar to how humans would do it. Secure a large number of STM microscopes and vacuum chambers. Build, using electron beam lathes or other methods, small parts to test nanoscale bonding strategies. Test many variants and develop a simulation sufficient to design nanoscale machine parts. Iteratively use the prior data to design and test ever larger and more sophisticated assemblies. Then, once you have a simulated path to success and high enough confidence, bootstrap a nanoforge. (bootstrap means you try to choose a path where prior steps on the path make future steps easier)
An ASI holding everyone hostage isn’t a winning scenario for the ASI. Humans are just going to pull the trigger on their own nuclear guns in such a scenario.
The EY scenarios where the ASI wins generally involve deception. Everything is fine, until everyone dies all at the same time from some type of immediately lethal but delayed bioweapon or nanotechnology based weapon.
Botulism toxin is one way this is theoretically achievable—it takes a very small quantity to kill a human. So a ‘time bomb’ of a capsule of it, injected painlessly into most humans using nanotechnology, or a virus that edits our genome and inserts the botulism gene and some mechanism to prevent expression for a few months, or similar method. For one thing, botulism toxin is a protein and is probably much larger than it needs to be...
If all of EY’s scenarios require deception, then detection of deception from rogue AI systems seems like a great place to focus on. Is there anyone working on that problem?
https://www.lesswrong.com/posts/HByDKLLdaWEcA2QQD/applying-superintelligence-without-collusion
Eric Drexler is.