In all the stories I've read about an AI dystopia, the proposed solution is to kill it: from Disney to the Lawnmower Man movie to Rucker's Postsingular, and so on. We know what General Relativity looks like, so we can develop the story of a civilization that happens to discover it; but we still have little clue what an FAI would look like, and I don't think we should burden a poor writer with discovering the theory before writing a novel… From here a writer has two choices: use FAI (we can imagine how it looks) to solve some other existential risk, or narrow the UAI existential risk to some subset where the Friendly part is solvable but not obvious. I think I'll ponder the latter track for a while...