… Why isn’t this compatible with saying that the supervisor (HCH) is “able to accurately predict how well their actions fulfil long-term goals”? Like, HCH presumably takes those actions because it thinks those actions are good for long-term goals.
In the imitative case, the overseer never makes a determination about how effective the model’s actions will be at achieving anything. Rather, the overseer is only trying to produce the best answer for itself, and the loss is determined via a distance metric. While the overseer might very well try to determine how effective its own actions will be at achieving long-term goals, it never evaluates how effective the model’s actions will be. I see this sort of trick as the heart of what makes the counterfactual oracle analogy work.
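To make the contrast concrete, here’s a toy sketch (just my own illustration, not anything from the post; the function names and the vector encoding of answers are made up): in the imitative case the loss is only a distance between the model’s answer and the answer the overseer produced for itself, whereas an approval-style loss would have the overseer score the model’s answer directly.

```python
import numpy as np

def imitative_loss(model_answer: np.ndarray, overseer_answer: np.ndarray) -> float:
    # Imitative amplification: the overseer produces its *own* answer, and the
    # loss is just a distance metric between the two answers. The overseer never
    # judges the consequences of the model's answer.
    return float(np.linalg.norm(model_answer - overseer_answer))

def approval_loss(model_answer: np.ndarray, overseer_evaluate) -> float:
    # For contrast (approval-style): the overseer looks at the model's answer
    # and scores it, so the model's action is evaluated directly.
    return -float(overseer_evaluate(model_answer))

# Toy usage with made-up numbers:
model_out = np.array([0.2, 0.9])
hch_out = np.array([0.0, 1.0])
print(imitative_loss(model_out, hch_out))                       # distance to what HCH would say
print(approval_loss(model_out, lambda a: float(a @ hch_out)))   # overseer scoring the model's answer
```

In the first case the only way for the model to do well is to be close to what HCH would say; in the second, the model’s output can do well by whatever moves the overseer’s evaluation.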
My point here is that I think imitative amplification (if you believe it’s competitive) is a counter-example to Richard’s argument in his “Myopic training doesn’t prevent manipulation of supervisors” section, since any manipulative actions that an imitative amplification model takes aren’t judged by their consequences but rather just by how closely they match up with what the overseer would do.
“While the overseer might very well try to determine how effective its own actions will be at achieving long-term goals, it never evaluates how effective the model’s actions will be.”
Evan, do you agree that for the model to imitate the actions of the supervisor, it would be useful to mimic some of the thought processes the supervisor uses when generating those actions?
In other words, if HCH is pursuing goal X, what feature of myopic training selects for a model that is internally thinking “I’m going to try to be as close to HCH as possible in this timestep, which involves reasoning about how HCH would pursue X”, versus a model that’s thinking “I’m going to pursue goal X”? (To the extent these are different, which I’m still confused about).
“In the imitative case, the overseer never makes a determination about how effective the model’s actions will be at achieving anything. Rather, the overseer is only trying to produce the best answer for itself, and the loss is determined via a distance metric. While the overseer might very well try to determine how effective its own actions will be at achieving long-term goals, it never evaluates how effective the model’s actions will be. I see this sort of trick as the heart of what makes the counterfactual oracle analogy work.”
I don’t really understand what you’re saying here. A thing you might be saying:
If that is what you’re saying, I don’t see why this is relevant to whether or not we should use myopic training?
(It’s possible I need to reread the counterfactual oracle analogy, though I did skim it right now and didn’t immediately see the relevance.)
“My point here is that I think imitative amplification (if you believe it’s competitive) is a counter-example to Richard’s argument in his ‘Myopic training doesn’t prevent manipulation of supervisors’ section, since any manipulative actions that an imitative amplification model takes aren’t judged by their consequences but rather just by how closely they match up with what the overseer would do.”
That seems to be a property of myopic cognition rather than myopic training? (See also this comment.)
I’m also confused.