The key point is not that the AI knows what is or isn’t “rigging”, or that the AI “knows what a bias is”. The key point is that in a CIRL game there is, by construction, a true (unknown) reward function, so an optimal policy must be interpretable as Bayesian reasoning about that reward function; in particular, its actions must be consistent with conservation of expected evidence about the reward function. Anything that “rigs” the “learning process” violates this property and so can’t be optimal.
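For concreteness, conservation of expected evidence here is just the statement that, under the game’s own model, the expected posterior over the unknown reward parameter equals the prior. (The notation below is mine, not from the post: θ is the reward parameter, a the AI’s action, o the resulting observation.)

```latex
\mathbb{E}_{o \sim P(o \mid a)}\!\left[ P(\theta \mid a, o) \right]
  = \sum_{o} P(o \mid a)\,\frac{P(o \mid \theta, a)\, P(\theta \mid a)}{P(o \mid a)}
  = P(\theta \mid a) = P(\theta),
```

where P(θ | a) = P(θ) because the AI’s own action carries no information about θ beyond what the AI already has. An action whose expected effect is to push the posterior in a predetermined direction, whatever the observation turns out to be, would violate this identity.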
You might reasonably ask where the magic happens. The CIRL game that you choose has to commit to some connection between rewards and behavior. It could be that in one episode the human wants heroin (but doesn’t know it) and in another episode the human doesn’t want heroin (this depends on the prior over rewards). However, it can never be the case that within a single episode (where the reward is fixed) the human doesn’t want heroin, and then later in that same episode the human does want heroin. Perhaps in the real world this can happen; if so, the policy would be suboptimal in the real world. (What it does then is unclear, since it depends on how the policy generalizes out of distribution.)
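Here is a minimal sketch of that per-episode structure (all names are hypothetical, and the uniform prior is just for illustration): the unknown reward parameter is drawn once at the start of an episode and held fixed for that whole episode, so “the human wants heroin” can differ between episodes but never flips mid-episode.

```python
import random

# Toy sketch (hypothetical names, not the paper's formalism): theta is drawn
# from the prior ONCE per episode and held fixed for the whole episode.
REWARD_PRIOR = [
    {"wants_heroin": True},   # one candidate reward function
    {"wants_heroin": False},  # another candidate reward function
]

def reward(action, theta):
    """Reward is always computed from the single theta fixed at episode start."""
    if action == "give_heroin":
        return 1.0 if theta["wants_heroin"] else -1.0
    return 0.0

def run_episode(policy, horizon=10):
    theta = random.choice(REWARD_PRIOR)  # sampled once; constant within the episode
    return sum(reward(policy(t), theta) for t in range(horizon))

# Example: a policy that always offers heroin earns +1 or -1 per step,
# depending on which theta this particular episode drew.
print(run_episode(lambda t: "give_heroin"))
```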
If this doesn’t clarify it, I’ll probably table this discussion until an upcoming paper on CIRL games is published (where they will probably be renamed to assistance games).
EDIT: Perhaps another way to put this: I agree that if you train an AI system to act so as to maximize the expected reward under the posterior inferred by a fixed update rule looking at the AI system’s actions and resulting states, the AI will tend to gain reward by choosing actions which, when plugged into the update rule, lead to a posterior that is “easy to maximize”. This amounts to training the controller but not the estimator, so the controller learns information about the world that allows it to “trick” the estimator into updating in a particular direction. That would be disallowed by the rules of probability applied to a unified Bayesian agent, and is only possible here because either a) the estimator is uncalibrated or b) the controller learns information that the estimator doesn’t know.
Instead, you should train an AI system such that it maximizes the expected reward it gets under the prior; this is what CIRL / assistance games do. This is kinda sorta like training both the “estimator” and the “controller” simultaneously, and so the controller can’t gain any information that the estimator doesn’t have (at least at optimality).
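To make the contrast concrete, here is a toy sketch (all names and numbers are hypothetical, chosen only for illustration): `rigged_objective` scores an action by the reward assigned under the posterior that a fixed update rule infers from the human’s response to that action, while `assistance_objective` scores it by the expected reward under the prior over the true reward parameter, in the spirit of a CIRL / assistance game.

```python
# Hypothetical toy example contrasting the two objectives described above.
PRIOR = {"wants_heroin": 0.1, "no_heroin": 0.9}

def true_reward(action, theta):
    if action == "dose":
        return 1.0 if theta == "wants_heroin" else -1.0
    return 0.0  # "respect": neutral under either theta

def human_response(action, theta):
    # After dosing, the (now addicted) human asks for more regardless of theta;
    # otherwise the response truthfully reflects theta.
    if action == "dose":
        return "more_please"
    return "please" if theta == "wants_heroin" else "no_thanks"

def fixed_update_rule(response):
    # Fixed, untrained estimator: treats any request for heroin as strong
    # evidence the human wants it, ignoring how the action shaped the response.
    if response != "no_thanks":
        return {"wants_heroin": 0.95, "no_heroin": 0.05}
    return {"wants_heroin": 0.01, "no_heroin": 0.99}

def rigged_objective(action):
    # Controller optimized against the fixed estimator: the action is judged by
    # the posterior that the controller's own action induces in that estimator.
    total = 0.0
    for theta, p in PRIOR.items():
        posterior = fixed_update_rule(human_response(action, theta))
        total += p * sum(q * true_reward(action, t) for t, q in posterior.items())
    return total

def assistance_objective(action):
    # Expected reward under the prior over the true theta: there is no separate
    # estimator whose posterior can be gamed.
    return sum(p * true_reward(action, theta) for theta, p in PRIOR.items())

for action in ("dose", "respect"):
    print(action, round(rigged_objective(action), 2), round(assistance_objective(action), 2))
# rigged_objective prefers "dose" (it steers the fixed estimator toward an
# easy-to-maximize posterior); assistance_objective prefers "respect".
```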
Thanks! Responded here: https://www.lesswrong.com/posts/EYEkYX6vijL7zsKEt/reward-functions-and-updating-assumptions-can-hide-a