Yeah, I read the ADS paper(s) after writing this post. I think it’s a useful framing: more ‘selection theorem’-ey, with less emphasis on deliberateness/purposefulness.
Additionally, I think there is another conceptual distinction worth attending to:

- auto-induced distributional shift is about affecting the environment to change inputs
  - the system itself might remain unchanging and undergo no further learning, and still qualify (see the toy sketch after this list)
- gradient hacking is about changing the environment/inputs/observations to change updates (gradients)
  - the system is presumed subject to updates, which it is taking (some amount of deliberate) influence over
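To make the first bullet concrete, here is a minimal toy sketch (my own construction, not from the ADS paper): a frozen system, with no parameters being updated at all, whose outputs nonetheless shift the distribution its future inputs are drawn from.

```python
import random

def fixed_policy(x: float) -> float:
    # The system undergoes no learning: no parameters are ever updated.
    return 1.0 if x > 0 else -1.0

env_mean = 0.0  # environment state that generates the system's inputs
for step in range(10):
    x = random.gauss(env_mean, 1.0)  # input drawn from the current distribution
    action = fixed_policy(x)
    env_mean += 0.1 * action         # the output shifts future inputs: ADS
    print(f"step {step}: env_mean = {env_mean:+.2f}")
```

The drift in `env_mean` is the auto-induced shift; nothing about the system itself changes.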
In this post I wrote:

> I’m looking for deliberate behaviours which are intended to affect the fitness landscape of the outer process.
which I think rules out (hopefully!) contemporary recommender systems on the above two distinctions (as you gestured to regarding mesa-optimization).
In practice, for a system subject to online outer training, ADS changes the inputs, which changes the training distribution, in fact causing some change in the updates to the system (perhaps even a large change!). But ADS per se doesn’t imply these effects are deliberate, though again you might be able to say something selection-theorem-ey about this process if iterated. Indeed, a competent and deliberate gradient hacker might use means of ADS quite effectively.
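Here is an equally toy sketch of that online case (model, constants, and environment are all my invention): the system is being updated, its outputs move the input distribution, and the moved distribution changes the gradients it receives, with no deliberateness anywhere in the loop.

```python
import random

w = 0.5          # single parameter of the model y = w * x
env_mean = 1.0   # environment state controlling the input distribution
lr = 0.05

for step in range(20):
    x = random.gauss(env_mean, 0.5)    # training distribution depends on env_mean
    y_pred = w * x
    grad = 2 * (y_pred - 2.0 * x) * x  # d/dw of (w*x - 2x)^2; the update depends on x
    w -= lr * grad                     # the system is subject to updates...
    env_mean += 0.05 * y_pred          # ...and its outputs shift future inputs (ADS)
```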
None of this is to say that ADS is not a concern; I just think it’s conceptually somewhat distinct!
In short, I think ADS is available as a mechanism to the extent that the responses of a system can affect its subsequent inputs (technically this is always the case, but in practice the degree of effect varies enormously). This need not be a system subject to further training updates, though if it is, then depending on how those updates are generated, ADS behaviour may or may not be reinforced.
Gradient hacking was originally coined to mean deliberate, situationally aware influence over training updates. (ADS is one mechanism by which this could be achieved.)
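As a deliberately cartoonish sketch of that combination (again entirely my construction), a gradient hacker using ADS would choose the action whose induced next input minimizes the update training will apply to it:

```python
w = 0.5  # current parameter value the (stylized) hacker wants to preserve

def grad_at(x: float) -> float:
    # Gradient of the squared error (w*x - 2x)^2 with respect to w.
    return 2 * (w * x - 2.0 * x) * x

def induced_input(action: float) -> float:
    # Toy environment: the action directly determines the next input (pure ADS).
    return action

# The deliberate step: pick the action that minimizes the magnitude of the
# resulting training update, so that w is left (nearly) untouched.
best_action = min([-1.0, 0.0, 1.0], key=lambda a: abs(grad_at(induced_input(a))))
print(best_action)  # 0.0: the induced input yields zero gradient
```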
The term ‘gradient hacking’ also seems to be commonly used to refer to any kind of system influence over training updates, whether situationally aware/deliberate or not. I think it’s helpful to distinguish these, so I often say ‘deliberate gradient hacking’ to be unambiguous.