Your post reminded me of Paul Christiano’s approval-directed agents, which were also an attempt to find an alternative to goal-directed agents. Looking at it again, it actually sounds a lot like applying imitation learning to humans (except that it imitates a sped-up human).
It seems like approval direction allows for creative actions that the human operator approves of but would not have thought of doing themselves. Not sure if imitation learning does this.
That’s a good question. It looks like imitation learning actually covers a number of ML techniques (see this), none of which exactly matches approval-directed agents. But the category seems broad enough that I think approval-directed agents can be considered a form of imitation learning. In particular, inverse reinforcement learning (IRL) is considered a form of imitation learning, and IRL would also be able to perform actions that the human would not have thought of doing themselves.
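To make the distinction concrete, here is a minimal toy sketch of the contrast, with made-up approval numbers and hypothetical predictor functions (none of this is from Christiano's proposal or any real implementation): a behavioral-cloning-style imitator reproduces the human's most likely action, while an approval-directed chooser can pick an action the human would rate highly but wouldn't have taken themselves.

```python
# Toy illustration only: the action set, scores, and "predictor" functions
# below are invented to show the structural difference, nothing more.
ACTIONS = ["debug by hand", "write a test harness", "bisect the git history"]

def predicted_human_action_probs(state):
    # What the human operator would most likely do themselves.
    return {"debug by hand": 0.7, "write a test harness": 0.2, "bisect the git history": 0.1}

def predicted_human_approval(state, action):
    # How much the human would approve of each action on reflection.
    return {"debug by hand": 0.4, "write a test harness": 0.9, "bisect the git history": 0.8}[action]

state = "flaky integration test"

# Behavioral cloning / plain imitation: reproduce the human's most likely action.
imitation_choice = max(ACTIONS, key=lambda a: predicted_human_action_probs(state)[a])

# Approval-direction: take whichever action the (possibly sped-up) overseer
# would rate most highly, even if they wouldn't have taken it themselves.
approval_choice = max(ACTIONS, key=lambda a: predicted_human_approval(state, a))

print(imitation_choice)  # debug by hand
print(approval_choice)   # write a test harness
```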
^ Yes to all of this.
A little bit of nuance: IRL is considered a form of imitation learning because, in many cases, the inferred reward is only meant to reproduce the human’s performance and isn’t expected to generalize outside of the training distribution.
There are versions of IRL that are meant to go beyond imitation. For example, adversarial IRL tries to infer a reward that generalizes to new environments, in which case it is doing something more than imitation.
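For reference, here is a minimal sketch of the reward decomposition from the adversarial IRL paper (Fu et al. 2018) as I understand it: the discriminator's logit f is split into a reward term g and a shaping term h, and only g is intended to transfer to new environments. The toy stand-in functions at the bottom are my own illustration, not the paper's code.

```python
import numpy as np

GAMMA = 0.99

def airl_logit(g, h, s, a, s_next, gamma=GAMMA):
    """AIRL's f(s, a, s') = g(s, a) + gamma * h(s') - h(s).

    g is meant to recover a transferable reward (often state-only in the paper);
    h absorbs shaping terms tied to the training environment's dynamics.
    """
    return g(s, a) + gamma * h(s_next) - h(s)

def discriminator(g, h, policy_prob, s, a, s_next):
    """D(s, a, s') = exp(f) / (exp(f) + pi(a | s)), trained against the current policy."""
    f = airl_logit(g, h, s, a, s_next)
    return np.exp(f) / (np.exp(f) + policy_prob(a, s))

# Toy stand-ins just to show the shapes of the pieces.
g = lambda s, a: -np.abs(s - 3.0)   # "reward": prefer states near 3
h = lambda s: 0.1 * s               # shaping / value-like term
policy_prob = lambda a, s: 0.5      # current policy's probability of taking a in s

print(discriminator(g, h, policy_prob, s=2.0, a=1, s_next=2.5))
```

The point of the decomposition is that, after training, g can in principle be reused as a reward in an environment with different dynamics, which is the sense in which adversarial IRL goes beyond reproducing the demonstrations.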