I don’t think there is any chance of a malign ASI killing everyone off in less than a few years, because it would take a long time to reliably automate the mineral extraction, manufacturing processes, and power supplies required to guarantee the ASI’s survival and growth objectives (assuming it is not suicidal). Building precise stuff reliably is really, really hard; robotics and many other elements of the needed infrastructure are high maintenance and demand high-dexterity maintenance agents, and the tech base required to support current leading-edge chip manufacturing probably couldn’t be supported by fewer than a few tens to a hundred million humans. That’s a lot of high-performance meat-actuators and squishy compute to supplant. Datacenters and their power supplies and cooling systems, plus myriad other essential elements, will be militarily vulnerable for a long time.
I think we’ll have many years to contemplate our impending doom after ASI is created, though I wouldn’t be surprised if it quickly created a pathogenic or nuclear gun to hold to our collective heads and prevent us from interfering with or interrupting its goals.
I also think it won’t be that hard to get a large proportion of the human population clamoring to halt AI development, with sufficient political and financial strength to stop even rogue nations. A strong innate tendency towards Millennialism exists in a large subset of humans (as does a likely linked general tendency to anxiousness). We see it in the Green movement, and redirecting it towards AI is almost certainly achievable with the sorts of budgets that believers in existential alignment danger (some billionaires in their ranks) could muster. Social media is a great tool for doing this these days if you have the budget.
https://www.lesswrong.com/posts/CqmDWHLMwybSDTNFe/fighting-for-our-lives-what-ordinary-people-can-do?commentId=dufevXaTzfdKivp35#:~:text=%2B7-,Comment%20Permalink,-Foyle
I believe you are correct; however, EY argues that a sufficiently intelligent AGI might be able to hack biology and use ribosomes and protein machinery to skip straight to diamondoid self-replicating nanotechnology.
I suspect this is impossible. That is, a sequence of steps that would work and achieve this may exist, but that does not mean the information needed to calculate what those steps are exists within the pool of [all scientific data collected by humans].
Instead you would have to do this iteratively, similar to how humans would do it. Secure a large number of scanning tunneling microscopes (STMs) and vacuum chambers. Build, using electron-beam lathes or other methods, small parts to test nanoscale bonding strategies. Test many variants and develop a simulation sufficient to design nanoscale machine parts. Iteratively use the prior data to design and test ever larger and more sophisticated assemblies. Then, once you have a simulated path to success and high enough confidence, bootstrap a nanoforge (“bootstrap” meaning you try to choose a path where prior steps on the path make future steps easier). A rough sketch of that loop is below.
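Here is a minimal Python sketch of that design-build-simulate loop, purely to make the iteration structure concrete; every function and number in it (run_experiment, refine_simulation, the accuracy threshold) is a hypothetical placeholder, not a claim about any real tooling:

```python
import random

def run_experiment(design):
    """Stand-in for a real STM / e-beam fabrication test of one candidate part."""
    # Hypothetical: return a noisy measurement of how well the design bonds or assembles.
    return {"design": design, "score": random.random()}

def refine_simulation(sim_accuracy, new_results):
    """Each batch of real measurements nudges the simulation's predictive accuracy upward."""
    return min(1.0, sim_accuracy + 0.05 * len(new_results) / 10)

def bootstrap_nanoforge(confidence_threshold=0.95, batch_size=10):
    sim_accuracy = 0.1        # crude initial model of nanoscale bonding
    assembly_complexity = 1   # start with the smallest parts
    while sim_accuracy < confidence_threshold:
        # Build and test a batch of parts at the current complexity level.
        candidates = [f"part-{assembly_complexity}-{i}" for i in range(batch_size)]
        results = [run_experiment(c) for c in candidates]
        # Fold the measurements back into the simulation.
        sim_accuracy = refine_simulation(sim_accuracy, results)
        # Prior steps make later steps easier: attempt larger assemblies next round.
        assembly_complexity += 1
    print(f"Simulated path to success at complexity {assembly_complexity}, "
          f"simulation accuracy {sim_accuracy:.2f}: begin nanoforge construction.")

if __name__ == "__main__":
    bootstrap_nanoforge()
```

The point of the structure, not the numbers, is that real-world measurements feed the simulation, and the simulation only earns trust for larger assemblies after the smaller ones have been physically validated.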
An ASI holding everyone hostage isn’t a winning scenario for the ASI. Humans are just going to pull the trigger on their own nuclear guns in such a scenario.
The EY scenarios where the ASI wins generally involve deception. Everything is fine, until everyone dies all at the same time from some kind of delayed-action but immediately lethal bioweapon or nanotechnology-based weapon.
Botulinum toxin is one way this is theoretically achievable; its estimated lethal dose in humans is on the order of a few nanograms per kilogram, so it takes a vanishingly small quantity to kill a person. So a ‘time bomb’ of a capsule of it, injected painlessly into most humans using nanotechnology, or a virus that edits our genome, inserts the botulinum toxin gene, and carries some mechanism to prevent expression for a few months, or a similar method. For one thing, botulinum toxin is a protein and is probably much larger than it needs to be...
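As a rough back-of-the-envelope check on just how small that quantity is (the ~2 ng/kg figure is an often-cited estimate, used here only for order-of-magnitude purposes):

```python
# Back-of-the-envelope: total botulinum toxin mass needed to dose everyone,
# assuming an often-cited lethal-dose estimate of ~2 ng/kg (injected).
lethal_dose_ng_per_kg = 2          # assumed estimate, order-of-magnitude only
avg_body_mass_kg = 70
population = 8_000_000_000

dose_per_person_ng = lethal_dose_ng_per_kg * avg_body_mass_kg   # ~140 ng
total_grams = dose_per_person_ng * population * 1e-9             # ng -> g

print(f"~{dose_per_person_ng:.0f} ng per person, ~{total_grams / 1000:.1f} kg total")
# -> roughly 140 ng per person, on the order of a kilogram for the whole population
```

Under those assumptions, the entire global dose is on the order of a single kilogram of toxin, which is what makes the distribution mechanism, not the synthesis quantity, the hard part of the scenario.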
If all of EY’s scenarios require deception, then detection of deception from rogue AI systems seems like a great place to focus on. Is there anyone working on that problem?
https://www.lesswrong.com/posts/HByDKLLdaWEcA2QQD/applying-superintelligence-without-collusion
Eric Drexler is.