My point here is that I think imitative amplification (if you believe it’s competitive) is a counter-example to Richard’s argument in his “Myopic training doesn’t prevent manipulation of supervisors” section, since any manipulative actions that an imitative amplification model takes aren’t judged by their consequences but only by how closely they match what the overseer would do.
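To make the contrast concrete, here is a minimal sketch (all names and toy policies are hypothetical, not anything from the post) of the structural difference I have in mind: under an imitative-amplification-style loss, the model’s action is scored only by how closely it matches what the overseer would have done on the same input, whereas an outcome-based loss depends on what the action later causes.

```python
# Toy sketch, hypothetical names throughout -- just the shape of the two losses.

def overseer_action(state):
    """Stand-in for "what HCH / the amplified overseer would do" on this input."""
    return 2.0 * state  # arbitrary toy policy


def myopic_imitation_loss(model_action, state):
    """Scored purely by similarity to the overseer at this timestep;
    the consequences of the action never enter the loss."""
    return (model_action(state) - overseer_action(state)) ** 2


def outcome_based_loss(model_action, state, rollout):
    """Contrast case: the action is rolled out and judged by what it
    eventually causes, so manipulating that later evaluation can pay off."""
    return -rollout(model_action(state))


def model(state):
    return 2.0 * state + 0.1  # a slightly-off imitator


# A manipulative action gets credit under the myopic loss only insofar as it
# is also what the overseer would have done; what it causes downstream is
# never evaluated.
print(myopic_imitation_loss(model, state=3.0))                    # ~0.01
print(outcome_based_loss(model, state=3.0, rollout=lambda a: a))  # depends on the rollout
```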
“While the overseer might very well try to determine how effective its own actions will be at achieving long-term goals, it never evaluates how effective the model’s actions will be.”
Evan, do you agree that for the model to imitate the actions of the supervisor, it would be useful to mimic some of the thought processes the supervisor uses when generating those actions?
In other words, if HCH is pursuing goal X, what feature of myopic training selects for a model that is internally thinking “I’m going to try to be as close to HCH as possible in this timestep, which involves reasoning about how HCH would pursue X”, versus a model that’s thinking “I’m going to pursue goal X”? (To the extent these are different, which I’m still confused about).
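A toy way to phrase my confusion (everything here is a made-up example, not a claim about how training actually goes): two policies with very different internal reasoning can be indistinguishable to the per-timestep imitation signal if they produce the same actions on every state seen in training, and only come apart on states where HCH would behave differently from a direct goal-X pursuer.

```python
# Hypothetical toy setup: "goal X" is reaching state 10, and HCH pursues it
# but defers (does nothing) on unfamiliar states.

GOAL = 10

def hch_policy(state):
    """Stand-in for HCH: pursue X on familiar states, defer elsewhere."""
    if state > 5:                 # off-distribution: HCH would stop and defer
        return 0
    return min(state + 1, GOAL)   # otherwise take one step toward the goal

def imitator_policy(state):
    """Internally reasons "what would HCH do this timestep?" and copies it."""
    return hch_policy(state)

def goal_pursuer_policy(state):
    """Internally reasons directly about pursuing X, never consulting HCH."""
    return min(state + 1, GOAL)

train_states = range(5)  # the only states myopic training ever scores

# Both policies get exactly the same (zero) imitation loss on every training state...
assert all(imitator_policy(s) == goal_pursuer_policy(s) == hch_policy(s)
           for s in train_states)

# ...so the training signal alone doesn't distinguish them; they only
# diverge where HCH would have deferred.
print(imitator_policy(7), goal_pursuer_policy(7))  # 0 8
```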
I don’t really understand what you’re saying here. A thing you might be saying:
If that is what you’re saying, I don’t see why this is relevant to whether or not we should use myopic training?
(It’s possible I need to reread the counterfactual oracle analogy, though I skimmed it just now and didn’t immediately see the relevance.)
> My point here is that I think imitative amplification (if you believe it’s competitive) is a counter-example to Richard’s argument in his “Myopic training doesn’t prevent manipulation of supervisors” section, since any manipulative actions that an imitative amplification model takes aren’t judged by their consequences but only by how closely they match what the overseer would do.
That seems to be a property of myopic cognition rather than myopic training? (See also this comment.)
I’m also confused.
> “While the overseer might very well try to determine how effective its own actions will be at achieving long-term goals, it never evaluates how effective the model’s actions will be.”
>
> Evan, do you agree that for the model to imitate the actions of the supervisor, it would be useful to mimic some of the thought processes the supervisor uses when generating those actions?
>
> In other words, if HCH is pursuing goal X, what feature of myopic training selects for a model that is internally thinking “I’m going to try to be as close to HCH as possible in this timestep, which involves reasoning about how HCH would pursue X”, versus a model that’s thinking “I’m going to pursue goal X”? (To the extent these are different, which I’m still confused about).