FAI (assuming we managed to specify its preference correctly) admits a general counterargument to the worry that some implementation decision in its design is seriously incorrect: FAI’s domain is the whole world, and FAI is itself part of that world. If it’s morally bad for the FAI to exist in the form in which it was initially constructed, then, barring some penalty, the FAI will change its own nature so as to make the world better.
In this particular case, the suggested conflict is between what we prefer to be done with things other than the FAI (the “serving humanity” part) and what we prefer to be done with the FAI itself (the “slavery is bad” part). But FAI operates on the world as a whole, and things other than the FAI are not different from the FAI itself in this regard. Thus, by the criterion of human preference, the FAI will decide what is the best thing to do, taking into account both what happens to the world outside itself and what happens to itself. Problem solved.
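To make the structure of this argument concrete, here is a toy sketch in Python. Everything in it is a hypothetical illustration, not any actual FAI design: it only shows that when the preference criterion is defined over complete world states, with the agent’s own design included in the state, self-modification gets evaluated by exactly the same criterion as any external action.

```python
# Toy sketch only: a preference criterion whose domain is the whole world
# state, including the agent's own design. All names are hypothetical.

def choose_action(world, design, actions, utility):
    """Return the action whose resulting world state scores highest.

    `utility` is defined over the pair (world, design): the external
    world *and* the agent's own nature. Changing the agent is therefore
    judged by the same criterion as changing anything else.
    """
    return max(actions, key=lambda act: utility(*act(world, design)))


# Each action maps (world, design) -> (new_world, new_design).
def keep_design(w, d):
    return (w, d)                    # leave the agent as it was built

def self_modify(w, d):
    return (w, "revised design")     # change the agent's own nature

# A (hypothetical) preference that disprefers the initial design.
def utility(w, d):
    return 1.0 if d == "revised design" else 0.5

best = choose_action("world", "initial design",
                     [keep_design, self_modify], utility)
# -> self_modify: the agent changes its own nature, because the criterion
#    ranks the resulting world state (itself included) higher.
```

The point of the sketch is that nothing in `choose_action` singles the agent out: the “slavery is bad” consideration enters through `utility` just like any preference about the external world.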