The short answer is yes: they are related and concern essentially the same thing. However, researchers' approaches vary a lot.
Relevant considerations that come to mind:
The extent to which values/preferences are legible
The extent to which they are discoverable
The extent to which they are hidden variables
The extent to which they are normative
How important immediate implementability is
How important extreme optimization is
How important safety concerns are
The result, I think, is something of a divide between safety-focused and capabilities-focused researchers in this area: because their assumptions differ, each cluster finds the other's work not very interesting or relevant.
Interesting points. The distinctions you mention could equally apply in distinguishing narrow from ambitious value learning. In fact, I think preference learning is pretty much the same as narrow value learning. Could it be, then, that ambitious value learning researchers are uninterested in preference learning to much the same extent that they are uninterested in narrow value learning?
“How important safety concerns are” is certainly a real difference, but the history of science teaches us that carrying something from a domain with one set of concerns into a domain with different ones has often proven extremely useful.