This section doesn’t prove that scheming is impossible; it just dismantles a common support for the claim.
It’s worth noting that this exact counting argument (counting functions) isn’t an argument that the people typically associated with counting arguments (e.g. Evan) endorse as what they were trying to argue.[1]
See also here, here, here, and here.
(Sorry for the large number of links. Note that these links don’t present independent evidence, so the quantity of links shouldn’t be updated on: the conversation is just very diffuse.)
Of course, it could be that counting in function space is a common misinterpretation. Or, more egregiously, people could be doing post-hoc rationalization even though they were de facto reasoning about the situation using counting in function space.
To add, here’s an excerpt from the Q&A on How likely is deceptive alignment?:
Question: When you say model space, you mean the functional behavior as opposed to the literal parameter space?
Evan: So there’s not quite a one-to-one mapping, because there are multiple implementations of the exact same function in a network. But it’s pretty close. I mean, most of the time when I’m saying model space, I’m talking either about the weight space or about the function space, where I’m interpreting the function over all inputs, not just the training data.
I only talk about the space of functions restricted to their training performance for this path dependence concept, where we get this view where, well, they end up on the same point, but we want to know how much we need to know about how they got there to understand how they generalize.
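To make Evan’s distinction concrete, here is a minimal numpy sketch (my own toy example, not something from the talk): permuting a network’s hidden units gives a different point in weight space that computes the identical function on every input, while two genuinely different functions can still agree when restricted to a finite training set.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2):
    """Tiny one-hidden-layer ReLU network mapping R^2 -> R."""
    return np.maximum(x @ W1 + b1, 0.0) @ W2

# Two distinct points in weight space: the second just permutes the hidden units.
W1, b1, W2 = rng.normal(size=(2, 4)), rng.normal(size=4), rng.normal(size=(4, 1))
perm = rng.permutation(4)
W1_p, b1_p, W2_p = W1[:, perm], b1[perm], W2[perm, :]

# Different weights, same function (checked here on random inputs; the permutation
# argument makes the equality exact on every input), so weight space -> function
# space is many-to-one.
x = rng.normal(size=(100, 2))
assert np.allclose(mlp(x, W1, b1, W2), mlp(x, W1_p, b1_p, W2_p))

# By contrast, two genuinely different functions can coincide when restricted to
# the training data: f is identically zero, g has roots exactly at the training
# points, so they match on the training set but diverge off it.
train_x = np.array([0.0, 1.0, 2.0])
def f(t): return np.zeros_like(t)
def g(t): return (t - 0.0) * (t - 1.0) * (t - 2.0)
assert np.allclose(f(train_x), g(train_x))                       # identical on the training data
assert not np.allclose(f(np.array([3.0])), g(np.array([3.0])))   # different functions overall
```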
I really appreciate the call-out that modern RL for AI does not equal reward-seeking (though I also appreciate @tailcalled’s reminder that historical RL did involve reward during deployment); this point has been made before, but not so thoroughly or clearly.
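As a toy illustration of that point (my own simplified setup, a two-armed bandit trained with REINFORCE, nothing from the post itself): the scalar reward shows up only as a coefficient on the training-time gradient, and the deployed policy is just a parameterized map to action probabilities that never takes reward as an input.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.zeros(2)                        # policy parameters (logits over 2 actions)

# Training: reward enters only as a weight on the gradient.
for _ in range(200):
    probs = softmax(theta)                 # the policy's output
    action = rng.choice(2, p=probs)
    reward = 1.0 if action == 1 else 0.0   # toy reward signal from the "environment"
    grad_log_prob = -probs
    grad_log_prob[action] += 1.0           # d/dtheta of log softmax(theta)[action]
    theta += 0.1 * reward * grad_log_prob  # REINFORCE step: reward scales the update

# Deployment: the trained policy never observes or optimizes reward.
print(softmax(theta))                      # strongly favors action 1
```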
A framing that feels alive for me is that AlphaGo didn’t significantly innovate in goal-directed search (applying MCTS was clever, but not new), but it did innovate in both data generation (using search to generate training data, which in turn improves the search) and offline RL.
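To make that loop concrete, here is a schematic sketch (my own simplification; the move values, simulation budget, and the stand-in for MCTS are invented for illustration): a crude search spends simulations to sharpen the current policy prior, the resulting visit distribution becomes the training target, and training on that target improves the prior that the next round of search starts from.

```python
import numpy as np

rng = np.random.default_rng(0)
N_MOVES = 5
logits = np.zeros(N_MOVES)                 # stand-in for the policy network

def prior():
    e = np.exp(logits - logits.max())
    return e / e.sum()

def search(p, move_values, n_sims=50):
    """Crude stand-in for MCTS: spend simulations guided by the prior and credit
    moves in proportion to how well their rollouts score."""
    visits = np.zeros(N_MOVES)
    for _ in range(n_sims):
        m = rng.choice(N_MOVES, p=p)
        visits[m] += move_values[m]
    return visits / visits.sum()

move_values = np.array([0.1, 0.2, 0.9, 0.3, 0.1])   # hidden quality of each move

for _ in range(100):
    target = search(prior(), move_values)  # the search generates training data...
    logits += 0.5 * (target - prior())     # ...training moves the prior toward it...
                                           # ...which makes the next search stronger.

print(np.argmax(prior()))                  # expected to settle on the best move (index 2)
```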