For the preference learning skepticism, does this extend to the research direction (not yet a research area) of modelling long-term preferences / preferences on reflection? This is more along the lines of the “AI-assisted deliberation” direction from ARCHES.
To me it seems like AI alignment that can capture preferences on reflection could be used to find solutions to many of the other problems. Though there are good reasons to expect we’d still want to do other work (because we might need theoretical understanding and okay solutions before AI reaches the point where it can help with research, because we want to do work ourselves to be able to check the solutions that AIs reach, etc.).
It also seems like areas such as FairML and Computational Social Choice will require preference learning as a component: my guess is that people’s exact preferences about fairness won’t have a simple mathematical formulation, and will instead need to be learned. I could buy the position that the necessary progress in preference learning will happen by default because of other incentives.
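As a purely illustrative sketch of the “learned, not hand-specified” claim: the simplest standard preference-learning setup fits a Bradley-Terry model to pairwise human judgments (here, judgments about which of two allocations is fairer). The data and names below are invented for illustration; this isn’t specific to FairML or to any particular proposal.

```python
import numpy as np

# Toy Bradley-Terry preference learning: recover scalar "fairness utilities"
# for a few candidate allocations from pairwise human judgments.
# All data and names here are hypothetical, purely for illustration.

n_items = 5
# Each pair (i, j) records that a judge rated allocation i fairer than j.
comparisons = [(0, 1), (0, 2), (1, 3), (2, 3), (0, 4), (4, 3)]

theta = np.zeros(n_items)  # learned utility per allocation
lr = 0.5
for _ in range(500):  # gradient ascent on the Bradley-Terry log-likelihood
    grad = np.zeros(n_items)
    for i, j in comparisons:
        # Bradley-Terry: P(i preferred over j) = sigmoid(theta_i - theta_j)
        p = 1.0 / (1.0 + np.exp(theta[j] - theta[i]))
        grad[i] += 1.0 - p
        grad[j] -= 1.0 - p
    theta += lr * (grad - 0.1 * theta)  # small L2 term keeps utilities finite
    theta -= theta.mean()               # utilities are shift-invariant

print(np.round(theta, 2))  # higher = judged fairer under the learned model
```

The point is just that the “fairness function” ends up as fitted parameters rather than a closed-form criterion like demographic parity; all the hard questions (whose judgments, snap judgments vs. judgments on reflection, aggregation across people) live outside a snippet like this.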