Given that the morality we want to impose on an FAI is kind of incoherent, maybe we should get an AI to make sense of it first?