I think Rohin's point is that the model of

“if I give the humans heroin, they’ll ask for more heroin; my Boltzmann-rationality estimator module confirms that this means they like heroin, so I can efficiently satisfy their preferences by giving humans heroin”

is more IRL than CIRL. It doesn’t necessarily assume that the human knows their own utility function and is trying to play a cooperative strategy with the AI that maximizes that same utility function. If I knew that what would really maximize utility was having that second hit of heroin, I’d try to indicate that to the AI I was cooperating with.
Problems with IRL look like “we modeled the human as an agent based on representative observations, and now we’re going to try to maximize the modeled values, and that’s bad.” Problems with CIRL look like “we’re trying to play this cooperative game with the human that involves modeling them as an agent playing the same game, and now we’re going to try to take actions that have really high EV in the game, and that’s bad.”
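To make the IRL failure mode concrete, here's a minimal toy sketch (the hypotheses, outcomes, numbers, and rationality coefficient are all made up for illustration, not anyone's actual system): a Boltzmann-rational observation model that updates on "the human keeps asking for heroin" and then naively maximizes the inferred reward.

```python
import numpy as np

OUTCOMES = ["heroin", "no_heroin"]

# Two candidate reward functions over outcomes (toy numbers).
HYPOTHESES = {
    "likes_heroin":    {"heroin": 1.0, "no_heroin": 0.0},
    "dislikes_heroin": {"heroin": 0.0, "no_heroin": 1.0},
}

BETA = 5.0  # assumed Boltzmann rationality coefficient


def boltzmann_likelihood(chosen_outcome, reward, beta=BETA):
    """P(human requests `chosen_outcome` | reward), assuming a Boltzmann-rational human."""
    logits = np.array([beta * reward[o] for o in OUTCOMES])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[OUTCOMES.index(chosen_outcome)]


# Uniform prior over reward hypotheses, updated on the observation that the
# (addicted) human keeps asking for more heroin.
posterior = {h: 1.0 / len(HYPOTHESES) for h in HYPOTHESES}
for _ in range(5):  # five observed requests
    for h, reward in HYPOTHESES.items():
        posterior[h] *= boltzmann_likelihood("heroin", reward)
    z = sum(posterior.values())
    posterior = {h: p / z for h, p in posterior.items()}

print(posterior)  # nearly all mass on "likes_heroin"

# "Maximize the modeled values": the AI commits to the inferred reward and
# concludes it should supply more heroin.
best = max(posterior, key=posterior.get)
print(max(OUTCOMES, key=lambda o: HYPOTHESES[best][o]))  # -> "heroin"
```

The problematic step is the last two lines: committing to the modeled reward and optimizing it, which is exactly the "maximize the modeled values, and that's bad" part above.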
Thanks! Responded here: https://www.lesswrong.com/posts/EYEkYX6vijL7zsKEt/reward-functions-and-updating-assumptions-can-hide-a