But since the FAI’s top-level goal is just to preserve human top-level goals, it would be pointless to go to great lengths ensuring the FAI holds its own top-level goals constant if you’re going to “correct” human goals first.)
Well, part of the sleight-of-hand here is that the FAI preserves the goals we would have if we were wiser, better people.
If changing top-level goals is allowed in this instance, or if this top-level goal is considered “not really a top-level goal”, I would become alarmed and demand an explanation of how a FAI distinguishes such pseudo-top-level goals from real ones.
Your alarm might not be relevant data, but an explanation might be available from the FAI, or from the person who proved it Friendly.