I’d also be interested in:
(3) could an AI that is developing nanotech, without paying attention to the full range of consequences, accidentally develop a form of nanotech that is devastating to humanity?
(Imagine, e.g., a form of nanotech that does something useful but also creates long-lasting poisonous pollution as a side effect.)
I.e., is it sufficient for safety that the AI isn’t trying to kill us with nanotech? Or must it also be actively trying to not kill us?