That’s true. I guess I should have clarified that the argument here doesn’t exclude nanotechnology from the category of catastrophic risks (by catastrophic, I mean things like hurricanes, which could cause a lot of damage but could not eliminate humanity), but it does rule out nanotechnology as an existential risk independent of AI.
Lots of simple replicators can use up the resources in a specific environment. But in order to present a true existential risk, nanotechnology would have to permanently out-compete humanity for vital resources, which would require outsmarting humanity in some sense.