I think that before something like that becomes possible, some less sophisticated intelligence will already have been used as a tool to do or solve something that destroys pretty much everything.
DOOM!
An AI that can solve bio or nanotech problems should be much easier to design than one that can destroy the world as a side-effect of unbounded self-improvement.
Probably true.
And only the latter category is subject to friendliness research.
Hmm. Possibly some research avenues are more promising than others—but this sounds a bit broad.