My main counterargument to such common “disassemble us for atoms” arguments is that they hinge on the idea that extremely efficient dry nanotechnology will ever be possible. Some problems, like the laws of thermodynamics and the speed of light, simply cannot be solved by throwing more intelligence at them; they are likely to be “hard capped” by the basic principles of physical reality.
My completely uneducated guess is that the “supertech” an AI would supposedly use to wipe us out falls into one of three tiers:
Pipedreams (impossible, or at least unachievable from this starting point): superluminal technologies, dry nanotech, efficient exponential swarms, untapped computation.
Flying Pigs (borderline possible, if you tweak the definitions and move some goalposts, but likely pointless): efficient wet nanotech, von Neumann probes, meaningful quantum computation.
Wunderwaffe (possible, but only really useful in the troperific scenario where the AI wants to wipe us out for evil’s sake, disregarding more efficient routes): swarm robotics, nuclear holocaust, bioweapons, social engineering via the Internet.
My intuition is that an AI apocalypse is never going to involve realized Pipedreams or Flying Pigs. The worst-case scenario is a non-sapient and relatively dumb AI going all PeterTodd on us, using completely mundane technologies and social hacking, and damaging us badly without wiping us out. Complete destruction of humanity is unlikely; the more likely scenario is a long, slow, and destructive turf war between humans and a barely superhuman AI, one that cripples both our civilization and the AI’s exponential potency plans.
This post from yesterday agrees with you: https://www.lesswrong.com/posts/FijbeqdovkgAusGgz/grey-goo-is-unlikely
But this reply to that one disagrees vigorously: https://www.lesswrong.com/posts/ibaCBwfnehYestpi5/green-goo-is-plausible
The Green Goo scenario as presented is plausible in principle, but not on its timeline. There is no plausible way for a biological system, especially one based on plants, to spread that fast. Even if we ignore issues like physical obstacles, rivers, mountains, roads, walls, oceans, bad weather, pests, natural diseases, natural fires, snow, internal mutations, etc., things that on their own would slow down and disorganize the Green Goo, there is also the issue of those pesky humans with their chainsaws, herbicides, and napalm. Worst case, GG would take decades, even centuries, to do us irreparable harm, and by that time we would either beat it, nuke it to glass, or fuck off to Mars where it can’t chase us.
The Green Goo scenario would be absolutely devastating, and very, very, very bad, but not even close to apocalyptic. I find it extremely unlikely that any kind of Green Goo could beat the passive defenses of Earth’s ecosystems on any timeline that matters, let alone the active offense of technologically advanced humans. Earth already has a fast-spreading malevolent biological intelligence with the means to sterilize continents: it’s called Homo sapiens.
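A toy back-of-the-envelope sketch of why I think contiguous spread is slow; every number below is an assumption pulled out of thin air, not a measurement:

```python
# Back-of-the-envelope: how long would a contiguous biological front take to
# cross a single continent? Both numbers are assumptions for illustration only.

front_speed_km_per_year = 50   # assumed: far faster than known invasive plants
continent_width_km = 5_000     # rough width of a large continental landmass

years_to_cross = continent_width_km / front_speed_km_per_year
print(f"Ground front crossing one continent: {years_to_cross:.0f} years")
# -> 100 years, before oceans, winters, pests, mutations, or humans with
#    chainsaws, herbicides, and napalm slow it down further.
```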
“There is no plausible way for a biological system, especially one based on plants, to spread that fast.”
We are talking about a malevolent AI that presumably has a fair bit of tech infrastructure. So a plane that sprinkles green goo seeds is absolutely a thing the AI can do. So is just posting the goo and tricking someone into sprinkling it on the other end. The green goo doesn’t need decades to spread around the world; it travels by airmail. So is green goo that grows itself into bird shapes, and so is a bunch of bioweapon pandemics (the standard long asymptomatic period, high virulence, and 100% fatality rate, plus a bunch of different versions so immunization and vaccines don’t work). The AI can also design highly effective diseases targeting all human crops.
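To make the timeline disagreement concrete, here is a similarly toy sketch of seeded, exponential dispersal; every parameter is invented for illustration, not a claim about real biology:

```python
import math

# Toy model: goo is mailed/air-dropped to many sites at once, and each patch
# doubles in area on a fixed schedule. Every parameter is an invented
# assumption, chosen only to show how the arithmetic differs from one ground front.

seeded_sites = 10_000           # assumed: parcels and plane drops worldwide
initial_patch_km2 = 0.01        # assumed: each seed starts as a 100 m x 100 m patch
doubling_time_years = 0.5       # assumed: each patch doubles in area twice a year
land_area_km2 = 149_000_000     # Earth's total land area, roughly

doublings = math.log2(land_area_km2 / (seeded_sites * initial_patch_km2))
years = doublings * doubling_time_years
print(f"Doublings to cover all land: {doublings:.1f}")   # ~20.5
print(f"Years under these toy assumptions: {years:.1f}") # ~10
```

The point of the sketch is only that seeded dispersal shifts the bottleneck from travel distance to doubling time; it says nothing about whether humans could suppress the patches.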