I think of FAI as being like gorillas trying to invent a human that will be safe for gorillas, though I may be unduly pessimistic.
Leave out “artificial”—what would constitute a “human-friendly intelligence”? Humans don’t. Even at our present intelligence we’re a danger to ourselves.
I’m not sure “human-friendly intelligence” is a coherent concept, in terms of being sufficiently well-defined (as yet) to say things about. The same way “God” isn’t really a coherent concept.