@ Roland: “Bootstrap the FAI by first building a neutral obedient AI(OAI) that is constrained in such a way that it doesn’t act besides giving answers to questions”
Yes, I’ve had the same idea, or almost the same idea. I call this the “Artificial Philosophy paradigm”—the idea that if you could build a very intelligent AI, then you could give it the goal of answering your questions, subject to the constraint that it is not allowed to influence the world except through talking to you. You would probably want to start by feeding this AI a large amount of “background data” about human life [video feeds from ordinary people, transcripts of people’s diaries, interviews with ordinary folks] and ask it to get into the same moral frame of reference as the one we occupy.
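To make the constraint concrete, here is a minimal sketch of what “not allowed to influence the world except through talking to you” might look like at the interface level. This is purely illustrative; the names `OracleAI`, `BackgroundCorpus`, and `answer` are hypothetical, not any real system’s API, and the actual hard part (the inference itself, and whether the constraint holds under optimization pressure) is deliberately stubbed out.

```python
from dataclasses import dataclass, field


@dataclass
class BackgroundCorpus:
    """Background data about human life: video-feed transcripts,
    diaries, interviews with ordinary folks."""
    documents: list[str] = field(default_factory=list)


class OracleAI:
    """An AI whose only output channel is text answers to explicit questions.

    The constraint is expressed structurally: the class exposes no
    methods that act on the world. Every interaction is question in,
    answer out.
    """

    def __init__(self, corpus: BackgroundCorpus):
        self._corpus = corpus  # read-only background data, never written back

    def answer(self, question: str) -> str:
        # Stand-in for whatever inference the oracle actually performs;
        # the point is that this is the *only* capability it exposes.
        return (f"(answer to {question!r}, derived from "
                f"{len(self._corpus.documents)} background documents)")


corpus = BackgroundCorpus(documents=["diary entry...", "interview transcript..."])
oracle = OracleAI(corpus)
print(oracle.answer("What do humans value?"))
```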
@ Stuart Armstrong: “If you’re smart enough, you could rule the world very easily just by giving answers to questions.”
+10 points for spotting the giant cheesecake fallacy in this criticism: the inference from “a smart enough AI *could* rule the world through its answers” to “it *would*.” Capability does not imply motive. This AI has no desire to rule the world.