No. FAI is supposed to implement an extrapolated version of mankind’s combined values, not search for an objectively defined moral code to implement.
Also: Eliezer has argued that, even from its programmers’ perspective, some elements of an FAI’s moral code (Coherent Extrapolated Volition) will probably look deeply immoral. (But will actually be OK.)
Why does the moral anti-realist think “an extrapolated version of mankind’s combined values” exists or is capable of being created? For the moral realist, the answer is easy: the existence of objective moral facts shows that, in principle, some moral system that all humans could endorse could be discovered and articulated.
As an aside, CEV is a proposed method for finding what an FAI would implement. I think one could believe FAI is possible even if CEV were the wrong track for finding what an FAI should do. In short, CEV is not necessarily part of the definition of Friendly.
Well, to assert that “an extrapolated version of mankind’s combined values can be created” doesn’t really assert much, in and of itself… just that some algorithm can be implemented that takes mankind’s values as input and generates a set of values as output. It seems pretty likely that a large number of such algorithms exist.
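To see how weak that bare existence claim is, here’s a minimal sketch of one such (trivial, entirely hypothetical) algorithm. The representation of “values” as weighted labels is my own toy assumption, and nothing about CEV suggests aggregation would be anything like this simple; the point is only that *some* values-in, values-out algorithm trivially exists.

```python
from collections import defaultdict

def naive_extrapolation(individual_values: list[dict[str, float]]) -> dict[str, float]:
    """Toy aggregator: average each named value's weight across all individuals.

    This trivially satisfies 'takes mankind's values as input and generates
    a set of values as output' -- which shows how little that claim asserts.
    """
    totals: defaultdict[str, float] = defaultdict(float)
    for values in individual_values:
        for name, weight in values.items():
            totals[name] += weight
    n = len(individual_values)
    return {name: total / n for name, total in totals.items()}

# Two toy individuals with conflicting weightings:
print(naive_extrapolation([
    {"honesty": 1.0, "liberty": 0.5},
    {"honesty": 0.5, "liberty": 1.0},
]))
# -> {'honesty': 0.75, 'liberty': 0.75}
```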
Of course, what CEV proponents want to say, additionally, is that some of these algorithms are such that their output is guaranteed to be something that humans ought to endorse. (Which is not to say that humans actually would endorse it.)
It’s not even clear to me that moral realists should believe this. That is, even if I posit that objective moral facts exist, it doesn’t follow that they can be derived by any algorithm applied to the contents of human minds.
But I agree with you that it’s still less clear why moral anti-realists should believe it.