There is no “Omega” so why are you wasting time on this question?
In the future, the FAI we build may well encounter the “F”AI of another civilization. When it does, if our FAI determines that the other “F”AI can predict its decisions (whether or not the reverse also holds), we want our FAI to make the right decisions.