I believe there’s an implicit assumption here that it’s possible to create AGI without an understanding of intelligence and motives that would lead one to choose FAI. (Of course, there’s a second question about whether an AI designed to be friendly will actually -be- friendly, but that’s another issue altogether.)