A rational agent must plan to be able to maintain, defend and reproduce itself (i.e. the physical hardware that it runs on). The agent must be able to control robots and a manufacturing stack, as well as a source of energy. In Yudkowsky’s model, the AI creates a nanotech lifeform that outcompetes biology. This “diamondoid bacteria” is simultaneously a robot, a factory and a solar power plant. Presumably it also has computation, wireless communication and a self-aligned copy of the AI’s software (or an upgraded version). I think a big part of the MIRI view depends on the possibility of amazing future nanotechnology, and the argument is substantially weaker if you are skeptical of nanotech.
The “diamondoid bacteria” is just an example of technology that we are moderately confident can exist, and that a superintelligence might use if there isn’t something even better. Not being a superintelligence ourselves, we can’t deduce what it would actually be able to use.
The most effective discoverable means seems more likely to be something that we would react to with disbelief that it could possibly work, if we had a chance to react at all. That’s how things seem likely to go when there’s an enormous difference in capability.
Nanotech is a fringe possibility, not because it’s presented as being too effective, but because there’s almost certainly something more effective that we don’t know about, something that isn’t even in our science fiction.