AI safety as Grey Goo in disguise. First, a rather obvious observation: while the Terminator movie pretends to depict AI risk, it actually plays on fears of nuclear war – remember the explosion that destroys the children's playground?
EY came to the realisation of AI risk after a period when he had worried more about grey goo (circa 1999) – the unstoppable replication of nanorobots that would eat all biological matter – as was revealed in a recent post about possible failures of EY's predictions. While his focus moved from grey goo to AI, the description of the catastrophe has not changed: nanorobots will eat biological matter, only now not just for replication but for the production of paperclips. This grey goo legacy is still part of EY's narrative about AI risk, as we can see from his recent post about AI lethalities.
However, if we remove the fear of grey goo, we can see that an AI which undergoes a hard takeoff is less dangerous than a slower one. If an AI gains superintelligence and super-capabilities from the start, the value of human atoms becomes minuscule, and the AI may preserve humans as a bargaining chip against other possible or future AIs. If an AI's ascent is slow, it has to compete with humans for a period of time, and this competition could take the form of war. Humans have killed the Neanderthals, but not the ants.
It's worth exploring exactly which resources are under competition. Humans have killed orders of magnitude more ants than Neanderthals, but the overlap in resources is much less complete for ants, so they've survived.
Grey-goo-like scenarios are scary because resource contention is 100%: there is nothing humans want or need that the goo doesn't also want or need, in ways that are incompatible with human existence. We just don't know how much resource-use overlap there will be between AI and humans (or some subset of humans), and fast takeoff is a little more worrisome because there's far less opportunity to find areas of compromise (where the AI values human cooperation enough to leave some resources to us).