Is AI Foom possible if even a godlike superintelligence cannot create gray goo? Some doubt that such rapidly self-replicating nanobots are possible at all. Without them, an AI's ability to take over the world quickly in the coming years would be significantly reduced.
Foom is more about growth in intelligence, which could be possible with existing computing resources and research into faster computers. Even if gray goo is impossible, once an AI is much smarter than humans, it can manipulate humans so that most of the world’s productive capacity ends up under its control.
It seems that this scenario leaves humanity a better chance of victory than the gray goo scenario. And even if we screw up the first time, the mistake could be fixed. Of course, none of this eliminates the need for AI alignment efforts.
Yeah, if gray goo is impossible, the AI can’t use that particular insta-win move. Though I think if the AI is smarter than humans, it can find other moves that will let it win slower but pretty much as surely.
Not that it’s an essential part of any particular argument, but my understanding was that literal grey goo (independently operating nanomachines breaking down inert matter and converting the whole Earth’s mass in a matter of hours) is probably ruled out by the laws of thermodynamics, because there is no nanoscale way to dissipate heat or generate enough energy to power transformations millions of times faster than biological processes. It also seems like nanomachines would be very vulnerable to heat or radiation because of the square-cube law.
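As a rough sanity check on the heat-dissipation point, here is a hedged back-of-the-envelope sketch; the ~1 eV per atom, ~30 amu average atomic mass, and ~10 hour conversion time are assumed order-of-magnitude figures, not numbers from the discussion:

```python
# Back-of-the-envelope sketch (all figures are order-of-magnitude assumptions):
# power needed to chemically rearrange roughly Earth's mass in a matter of hours,
# compared with what the planet's surface could plausibly radiate away.

EV = 1.602e-19          # joules per electronvolt
SIGMA = 5.67e-8         # Stefan-Boltzmann constant, W / (m^2 K^4)

earth_mass_kg = 5.97e24
mean_atomic_mass_kg = 30 * 1.66e-27   # assume ~30 amu per atom of crust-like material
energy_per_atom_j = 1 * EV            # assume ~1 eV of bond energy dissipated per atom

atoms = earth_mass_kg / mean_atomic_mass_kg
total_energy_j = atoms * energy_per_atom_j

hours = 10                             # assumed conversion time
power_w = total_energy_j / (hours * 3600)

earth_surface_m2 = 5.1e14
flux_w_m2 = power_w / earth_surface_m2
# Black-body temperature needed to radiate that flux (Stefan-Boltzmann law)
radiating_temp_k = (flux_w_m2 / SIGMA) ** 0.25

print(f"atoms to process:      {atoms:.1e}")
print(f"total energy:          {total_energy_j:.1e} J")
print(f"required power:        {power_w:.1e} W  (the Sun's total output is ~3.8e26 W)")
print(f"flux at the surface:   {flux_w_m2:.1e} W/m^2")
print(f"radiating temperature: {radiating_temp_k:.0f} K")
```

Under these assumptions the required power is on the order of the Sun's entire output, and radiating it from Earth's surface would demand temperatures in the tens of thousands of kelvin, far above anything molecular machinery survives.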
However, less extreme replicators are clearly physically possible because cell division and ribosomes exist. The fact that a literal grey goo scenario is probably ruled out by basic physics does not imply that the ultimate limits for non-biological replicators are close to those for biological replication (which are themselves pretty impressive). Assuming, without a specific reason, that all small-scale replicators can’t go much faster than bamboo would be the harmless supernova fallacy. For a scenario that isn’t close to grey goo, but is still much scarier than anything biology can do, see e.g. this.
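For calibration on how impressive even ordinary biological replication already is, here is a hedged illustration; the ~1 picogram cell mass and ~20 minute doubling time are assumed round/textbook figures for a bacterium under ideal conditions, and the calculation ignores feedstock and energy limits entirely:

```python
import math

# Hedged illustration: at ordinary bacterial doubling rates, with unlimited
# feedstock and energy, exponential replication reaches planet-scale mass in days.
# The real constraint is matter and energy supply, not the number of doublings.

cell_mass_kg = 1e-15        # assume ~1 picogram per bacterium
doubling_time_min = 20      # roughly a bacterium under ideal lab conditions
earth_mass_kg = 5.97e24

doublings = math.log2(earth_mass_kg / cell_mass_kg)
total_hours = doublings * doubling_time_min / 60

print(f"doublings needed: {doublings:.0f}")
print(f"time at one doubling per {doubling_time_min} min: "
      f"{total_hours:.0f} hours (~{total_hours / 24:.1f} days)")
```

Only about 130 doublings are needed, roughly two days at that pace, which is why the binding constraints on any replicator are energy and feedstock rather than raw doubling speed.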
Then the AI would have to become genuinely smarter than the very large groups of people trying to keep the world under control. And by that time people will surely be more prepared than they are now. I am sure the laws of physics allow the quick destruction of humanity, but it seems to me that without a swarm of self-replicating nanorobots, the probability of our survival after the creation of the first AGI exceeds 50%.