My informal critique of CIRL is that it makes two assumptions that aren't true: that H knows θ (i.e. knows their own values) and that H is perfectly rational (or noisily rational in a specific way).
This seems like a valid critique. But do you see it as a deal breaker? In my mind, these are pretty minor failings of CIRL. Because if a person is being irrational and/or can’t figure out what they want, then how can we expect the AI to? (Or is there some alternative scheme which handles these cases better than CIRL?)
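To make the second assumption concrete, here is a minimal sketch (not CIRL itself; the candidate θs, the bias model, and β are all my own illustrative choices) of reward inference under a Boltzmann-rationality assumption, and of how a systematic bias in H pushes the inference toward a confidently wrong θ:

```python
# A minimal, illustrative sketch (not CIRL itself) of the "noisily rational"
# assumption: the robot infers theta from H's choices by assuming H picks
# actions with probability proportional to exp(beta * reward(theta, action)).
# The candidate thetas, the bias model, and beta are all made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Two candidate reward parameters the robot considers (reward per action [A, B]).
thetas = {
    "prefers_A": np.array([1.0, 0.0]),
    "prefers_B": np.array([0.0, 1.0]),
}
true_theta = thetas["prefers_A"]
beta = 2.0  # assumed rationality; beta -> infinity recovers perfect rationality


def boltzmann_policy(theta, beta):
    """Action probabilities under the noisily-rational (Boltzmann) model."""
    logits = beta * theta
    p = np.exp(logits - logits.max())
    return p / p.sum()


def posterior_over_theta(actions, beta):
    """The robot's Bayesian update over theta, assuming H is Boltzmann-rational."""
    log_post = {name: 0.0 for name in thetas}
    for a in actions:
        for name, theta in thetas.items():
            log_post[name] += np.log(boltzmann_policy(theta, beta)[a])
    m = max(log_post.values())
    unnorm = {name: np.exp(v - m) for name, v in log_post.items()}
    z = sum(unnorm.values())
    return {name: v / z for name, v in unnorm.items()}


# Case 1: H really is Boltzmann-rational -> the posterior finds the true theta.
rational_actions = rng.choice(2, size=50, p=boltzmann_policy(true_theta, beta))
print("rational H:", posterior_over_theta(rational_actions, beta))

# Case 2: H is systematically biased (say, defaults to action B out of habit,
# regardless of values). The robot, still assuming Boltzmann rationality,
# becomes confident in "prefers_B" -- the wrong theta -- and more data only
# makes it more confident.
biased_actions = rng.choice(2, size=50, p=[0.2, 0.8])
print("biased H:  ", posterior_over_theta(biased_actions, beta))
```

The point of the second case is that the error isn't noise the robot can average out: under a misspecified rationality model, a bias gets read as evidence about values, so more data just makes the wrong conclusion more confident.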
(Update: Stuart replied to this comment on LessWrong: https://www.lesswrong.com/posts/YHQZHbhx7afHJ5Esw/biased-reward-learning-in-cirl?commentId=mtvrzJgNz5ngMMCvk)
I see this issue as being a fundamental problem:
https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into
https://www.lesswrong.com/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1
https://www.lesswrong.com/posts/3e6pmovj6EJ729M2i/general-alignment-plus-human-values-or-alignment-via-human