Interesting. How could we formalise that?
The following is one (simplistic) model which might be a useful starting point.
Consider a human and a robot playing a stochastic game like in CIRL. Suppose that each of them is an oracle machine plugged into a reflective oracle, like in the recent paper of Jan, Jessica and Benya. Let the robot have the following prior over the program implemented by the human. The human implements a random program (i.e. a random string of bits for some prefix-free universal oracle machine), conditional on this program being asymptotically optimal in mean for the class of all robot policies that avoid producing some set of "manipulative action sequences." Here, the "manipulative sequences" can be any set $S$ of action sequences such that $\sum_{x \in S} n^{-|x|} < \epsilon$, where $|x|$ is the length of the action sequence $x$, $n$ is the number of possible actions and $\epsilon$ is a parameter on which the prior depends.
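To make the smallness condition on $S$ concrete, here is a minimal sketch (the function names and the example set are mine, purely illustrative) that checks whether a finite candidate set of action sequences satisfies $\sum_{x \in S} n^{-|x|} < \epsilon$:

```python
from typing import Iterable, Sequence

def manipulative_set_measure(sequences: Iterable[Sequence[int]], n_actions: int) -> float:
    """Total measure sum_{x in S} n^{-|x|} assigned to a set of action sequences."""
    return sum(n_actions ** (-len(x)) for x in sequences)

def is_admissible_manipulative_set(sequences: Iterable[Sequence[int]],
                                   n_actions: int,
                                   epsilon: float) -> bool:
    """True iff the set is small enough to count as a set of 'manipulative'
    sequences for a prior with parameter epsilon."""
    return manipulative_set_measure(sequences, n_actions) < epsilon

# Example: with n = 2 actions, the sequences (0, 1) and (1, 1, 0) have measure
# 2^-2 + 2^-3 = 0.375, so they form an admissible set for epsilon = 0.5 but not 0.25.
S = [(0, 1), (1, 1, 0)]
assert is_admissible_manipulative_set(S, n_actions=2, epsilon=0.5)
assert not is_admissible_manipulative_set(S, n_actions=2, epsilon=0.25)
```

In other words, the constraint only forbids the robot from producing action sequences drawn from a set of small total measure under the uniform branching measure, so "most" behaviours remain available to the robot regardless of which such set $S$ is chosen.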