Let’s view the above process not from the vantage point of the overall training loop, but from the perspective of the model itself. For the purposes of demonstration, let’s assume the model is a conscious and coherent entity. From its perspective, the above process looks like:
Waking up with no memories in an environment.
Taking a bunch of actions.
Suddenly falling unconscious.
Waking up with no memories in an environment.
Taking a bunch of actions.
And so on...
The model never “sees” the reward. Each time it wakes up in an environment, its cognition has been altered slightly such that it is more likely to take certain actions than it was before.
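To make that concrete, here is a minimal toy sketch (my own construction, not something from the post): a REINFORCE-style update on a two-armed bandit. The thing to notice is where the reward shows up — only in the outer loop’s weight update, never as something the policy itself observes.

```python
# Toy sketch (my own construction, not from the post): REINFORCE-style updates
# on a two-armed bandit. `reward` appears only in the outer loop's weight
# update, never as an input to the policy itself.
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)               # the "model": preferences over two actions
arm_means = np.array([0.0, 1.0])   # hidden environment payoffs
lr = 0.1

for episode in range(500):
    # "Waking up with no memories": the policy sees nothing but its own weights.
    probs = np.exp(logits) / np.exp(logits).sum()
    action = rng.choice(2, p=probs)

    # "Taking a bunch of actions": the environment responds, but the reward
    # goes to the training loop, not to the model.
    reward = rng.normal(arm_means[action], 1.0)

    # "Falling unconscious": between episodes the weights are nudged so that
    # rewarded actions become more probable the next time the model wakes up.
    grad = -probs
    grad[action] += 1.0              # d/d(logits) of log pi(action)
    logits += lr * reward * grad

print(np.exp(logits) / np.exp(logits).sum())  # should now favour the higher-paying arm
```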
I think this is a really important insight / lens to be able to adopt. I’ve adopted this perspective myself, and have found it quite useful.
My main disagreement/worry with this post is that I think selection arguments are powerful but often (usually?) prove too much, and require significant care.
Thanks for the feedback!
From reading your linked comment, I think we agree about selection arguments. In the post, when I mention “selection pressure towards a model”, I generally mean “such a model would score highly on the reward metric” as opposed to “SGD is likely to reach such a model”. I believe the former is correct and the latter is very much an open question.
To second what I think is your general point, a lot of the language used around selection can be confusing because it conflates “such a solution would do well under some metric” with “your optimization process is likely to produce such a solution”. The wolves-with-snipers example illustrates this pretty clearly. I’m definitely open to ideas for better language to distinguish the two cases!
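To make the two readings concrete, here is a toy sketch (my own construction, with made-up numbers): a one-dimensional loss where the best-scoring solution sits in a narrow basin that gradient descent, started from typical near-zero initializations, almost never finds.

```python
# Toy sketch (my own construction, with made-up numbers) of the gap between
# "this solution would score best" and "the optimizer is likely to find it":
# a 1-D loss with a wide, shallow minimum near 0 and a narrow, deeper global
# minimum near 4. Gradient descent from near-zero initializations almost
# always settles in the wide basin.
import numpy as np

def loss(x):
    return 0.1 * x**2 - 3.0 * np.exp(-((x - 4.0) ** 2) / 0.5)

def grad(x):
    return 0.2 * x + 3.0 * np.exp(-((x - 4.0) ** 2) / 0.5) * 2.0 * (x - 4.0) / 0.5

rng = np.random.default_rng(0)
hits = 0
for _ in range(100):
    x = rng.uniform(-3.0, 3.0)     # "typical" small-weight initialization
    for _ in range(2000):          # plain gradient descent
        x -= 0.05 * grad(x)
    if abs(x - 4.0) < 1.0:
        hits += 1

# The narrow minimum scores better (lower loss), yet few runs reach it.
print(f"loss at wide minimum ~ {loss(0.0):.2f}, at narrow minimum ~ {loss(4.0):.2f}")
print(f"runs that found the narrow global minimum: {hits}/100")
```

The point of the toy: “selection would favour the x = 4 solution” is true under the first reading (it scores best) and false under the second (the optimization process rarely produces it).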