The scenarios you present would certainly be catastrophic (and are cause for serious concern about, and research into, nanotechnology), but could all of humanity really be consumed by self-replicators?
I would argue that even if they were maliciously designed, self-replicators would have to outsmart us in some sense in order to become a true existential risk. Simply grabbing resources is not enough to completely eliminate a society that is actively defending a fraction of those resources, especially if it also has access to self-replicators/nanotechnology (blue goo) and other defense mechanisms.
If we assume that a very smart and malevolent human is designing this grey goo, I suspect they could make something world ending.
I agree, conditional on the grey goo having some sort of intelligence which can thwart our countermeasures.
A parallel scenario occurs when a smart and malevolent human designs an AI (which may or may not choose to self-replicate). This post attempts to point out that these two situations are nearly identical, and that the existential risk does not come from self-replication or nanotechnology themselves, but rather from the intelligence of the grey goo. This would mean that we could prevent existential risks from replication by copying over the analogous solution in AI safety: to make sure that the replicators' decision-making remains aligned with human interests, we can apply alignment techniques; to handle malicious replicators, we can apply the same plan we would use to handle malicious AIs.
Simply grabbing resources is not enough to completely eliminate a society that is actively defending a fraction of those resources, especially if it also has access to self-replicators/nanotechnology (blue goo) and other defense mechanisms.
I don’t see anything that humans are currently doing that would stop human extinction in this scenario. The goo can’t reach the ISS, and maybe not submarines, but current submarines rely on outside resources like food.
In these scenarios, the self-replication is fast enough that there are only a few days between noticing something wrong and almost all humans being dead, which is not enough time to do much engineering. In a contest of fast replicators, the side with a small head start can vastly outnumber the other. If a sufficiently well designed blue goo is created in advance, then I expect humanity to be fine.
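As a rough back-of-the-envelope sketch of that head-start point (the doubling time and head start below are made-up numbers, not predictions): with equal doubling times, the two populations keep a constant ratio of 2^(head start / doubling time), so even a one-day lead is decisive.

```python
# Back-of-the-envelope sketch: two goo populations doubling at the same rate.
# Both numbers are illustrative assumptions, not predictions.
doubling_time_hours = 1.0   # assumed doubling time for both grey and blue goo
head_start_hours = 24.0     # assumed head start for the grey goo

# With equal doubling times the populations keep a constant ratio of
# 2 ** (head_start / doubling_time), no matter how long the contest runs.
advantage = 2 ** (head_start_hours / doubling_time_hours)
print(f"Grey goo outnumbers blue goo by a factor of about {advantage:,.0f}")
# -> a factor of about 16,777,216 (roughly 17 million to 1) from a one-day head start
```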
I agree, conditional on the grey goo having some sort of intelligence which can thwart our countermeasures.
A modern computer virus is not significantly intelligent. The designers of the virus might have put a lot of thought into searching for security holes, but the virus itself is not (usually) intelligent. The designers might know what SQL injection is and how it works; the virus just repeats a particular hard-coded string into any textbox it sees.
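To make that contrast concrete, here is a minimal sketch (hypothetical function and field names, using the textbook `' OR '1'='1` payload) of what "repeats a particular hard-coded string into any textbox it sees" amounts to: all of the cleverness lives in the string the human designer chose, and the program itself does no reasoning at all.

```python
# Minimal sketch: the "attack" is one hard-coded string chosen by the human
# designer; the program just pastes it into every field it encounters.
HARDCODED_PAYLOAD = "' OR '1'='1"   # classic textbook SQL-injection string

def fill_every_textbox(form_fields):
    """Return the same canned payload for every field, with no reasoning involved."""
    return {field: HARDCODED_PAYLOAD for field in form_fields}

print(fill_every_textbox(["username", "password", "search"]))
# {'username': "' OR '1'='1", 'password': "' OR '1'='1", 'search': "' OR '1'='1"}
```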
It might be possible to create a blue goo and nanoweapon defense system so sophisticated that no simplistic hard-coded strategy would work. But this does not currently exist, and it is only something that would be built if humanity were seriously worried about grey goo. And again, it has to be built in advance of advanced grey goo.
I agree. I think humanity should adopt some sort of grey-goo-resistant shelter and, if/when nanotechnology is advanced enough, create some sort of blue-goo defense system (perhaps it could be designed after the fact, in the shelter).
The fact that these problems seem tractable, and that, in my estimation, we will achieve dangerous AI before dangerous nanotechnology, suggests to me that preventing AI risk should take priority over preventing nanotechnology risks.
These are good points.