Simply grabbing resources is not enough to completely eliminate a society that is actively defending a fraction of those resources, especially if it also has access to defensive self-replicators/nanotechnology (blue goo) and other defense mechanisms.
I don’t see anything that humans are currently doing that would stop human extinction in this scenario. The goo can’t reach the ISS, and perhaps not submarines either, but current submarines rely on outside resources like food.
In these scenarios, the self-replication is fast enough that there are only a few days between noticing something is wrong and almost all humans being dead, which is not enough time to do much engineering. In a contest of fast replicators, the side with even a small head start can vastly outnumber the other. If a sufficiently well-designed blue goo is created in advance, then I expect humanity to be fine.
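To make the head-start point concrete, here is a minimal sketch with made-up, purely illustrative numbers (the doubling time and head start are assumptions, not claims about real nanotech): two replicators doubling at the same rate, where one side starts a few doubling periods earlier and therefore stays larger by a fixed multiplicative factor forever.

```python
# Illustrative sketch of the head-start argument (assumed, made-up numbers):
# two exponential replicators with the same doubling time, one released
# a few doubling periods earlier than the other.

doubling_time_hours = 1.0   # hypothetical doubling time, same for both sides
head_start_hours = 12.0     # hypothetical lead the grey goo has

def population(elapsed_hours, start=1):
    """Population after elapsed_hours of unchecked doubling."""
    return start * 2 ** (elapsed_hours / doubling_time_hours)

t = 48.0  # hours since the grey goo was released
grey = population(t)
blue = population(max(t - head_start_hours, 0))

print(f"grey / blue ratio after {t:.0f} h: {grey / blue:.0f}x")
# With a 1-hour doubling time, a 12-hour head start gives 2**12 = 4096x,
# and that ratio persists no matter how long both sides keep doubling.
```

The point of the sketch is only that the ratio is set by the head start (measured in doublings), not by how long the race runs, which is why a defense built after the fact starts at a fixed and possibly fatal disadvantage.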
I agree, conditional on the grey goo having some sort of intelligence which can thwart our countermeasures.
A modern computer virus is not significantly intelligent. The designers of the virus might have put a lot of thought into searching for security holes, but the virus itself is (usually) not intelligent. The designers might know what SQL injection is and how it works; the virus just repeats a particular hard-coded string into any text box it sees.
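To illustrate the "hard-coded string" point, here is a purely illustrative sketch (the payload is the textbook SQL-injection example, not any real worm's code): the entire "strategy" is a constant string, with no model of the target and no ability to adapt if the trick fails.

```python
# Purely illustrative: the whole "strategy" of a simplistic worm-like program
# is a constant string it stuffs into every input field it finds.
# There is no reasoning about the target and no fallback if the trick fails.

HARDCODED_PAYLOAD = "' OR '1'='1' --"   # the classic textbook injection string

def fill_every_textbox(form_fields):
    """Return the same fixed payload for every field, regardless of context."""
    return {field: HARDCODED_PAYLOAD for field in form_fields}

print(fill_every_textbox(["username", "password", "search"]))
# The intelligence lived in the designers who chose the payload; the program
# itself adapts to nothing, which is the sense in which it is not intelligent.
```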
It might be possible to create a blue goo and nanoweapon defense system so sophisticated that no simplistic hard-coded strategy would work against it. But this does not currently exist, and it is only something that would be built if humanity were seriously worried about grey goo. And again, it has to be built in advance of advanced grey goo.
I agree. I think humanity should build some sort of grey-goo-resistant shelter and, if/when nanotechnology is advanced enough, create some sort of blue-goo defense system (perhaps it could be designed after the fact, from inside the shelter).
The fact that these problems seem tractable, and that (in my estimation) we will achieve dangerous AI before dangerous nanotechnology, suggests to me that preventing AI risk should take priority over preventing nanotechnology risks.