Green goo doesn’t need all that (see: Covid and other plagues). Why would grey goo? Ok, Covid isn’t transforming everything into more of itself, but it’s doing enough of that to cause serious harm.
That’s true. I guess I should have clarified that the argument here doesn’t exclude nanotechnology from the category of catastrophic risks (by catastrophic, I mean things like hurricanes, which could cause lots of damage but could not eliminate humanity) but rules it out as an existential risk independent of AI.
Lots of simple replicators can use up the resources in a specific environment. But to present a true existential risk, nanotechnology would have to permanently outcompete humanity for vital resources, which would require outsmarting humanity in some sense.