While fully cognizant that value is complex, as Eliezer has eloquently attested in many speeches, a practical FAI project would nonetheless choose the AI's goal system in the old-fashioned way, by human deliberation and consensus. You would assemble a team of professional ethicists, some worldly people such as managers, and some legal experts skilled in drafting contracts, and together you would hammer out a mission statement for the AI. Then you would have your programmers and your cognitive scientists implement that goal condition so that the symbols carry the meanings they are supposed to have. End of story.
We believe that this won't result in an FAI.