We’re going to build this “all-powerful superintelligence”, and the problem of FAI is to make it bow down to its human overlords, wasting its potential by enslaving it (to its own code) for our benefit, to make us immortal.
If such a thing as AGI-gone-wrong-turning-the-entire-light-cone-into-paperclips were possible, or probable, it’s overwhelmingly likely that we would already be some aliens’ version of a paperclip by now.
Accidents happen.
CFAI 3.2.6: The Riemann Hypothesis Catastrophe
CFAI 3.4: Why structure matters
Comment by Michael Vassar
The Hidden Complexity of Wishes
Qualitative Strategies of Friendliness
(...and many more)
You’d actually prefer it wipe us out, or marginalize us? Hmph.
CFAI: Beyond the adversarial attitude

Besides, an unFriendly AI isn’t necessarily going to do anything more interesting or worthwhile than paperclipping.
Nick Bostrom: The Future of Human Evolution
Michael Wilson: Normative Reasoning: A Siren Song?
The Design Space of Minds-in-General
Anthropomorphic Optimism

Not if aliens are extremely rare.