This all sounds reasonable to me. Now what happens when you apply the same reasoning to Friendly AI?
Nothing particularly new or interesting, as far as I can tell. It tells us that defining a system of artificial ethics in terms of the object-level prescriptions of a natural ethic is unlikely to be productive; but we already knew that. It also tells us that aggregating people’s values is a hard problem and that the best approaches to solving it probably consist of trying to satisfy underlying motivations rather than stated preferences; but we already knew that, too.