Oliver Daniels
PhD Student at UMass Amherst
Obvious / informal connection between SLT and information bottleneck theory:
SLT says: more degeneracies → better generalization
IB says: less mutual information b/w input and representation → better generalization
the more degeneracies a function has, the less information it can preserve.
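A toy illustration of the informal claim (my own construction, not the SLT or IB formalism): a more degenerate, many-to-one representation carries less mutual information about the input.

```python
import numpy as np

def mutual_information(joint):
    # I(X;Z) in bits from a joint distribution table p(x, z)
    px = joint.sum(axis=1, keepdims=True)
    pz = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ pz)[nz])))

n = 8
p_x = np.full(n, 1 / n)  # uniform input distribution over 8 symbols

def joint_for_map(f):
    # joint p(x, z) for a deterministic representation z = f(x)
    joint = np.zeros((n, n))
    for x in range(n):
        joint[x, f(x)] += p_x[x]
    return joint

def identity(x): return x      # no degeneracy: 8 distinct representations
def coarse(x): return x // 4   # "degenerate": collapses 8 inputs onto 2 representations

print(mutual_information(joint_for_map(identity)))  # 3.0 bits
print(mutual_information(joint_for_map(coarse)))    # 1.0 bits
```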
overall how enthusiastic are you about safety-motivated people developing such an architecture?
(seems to come with obviously large capability externalities—we can deploy the model outside the sandbox!)
seems like restricting the search to plausible inputs (as judged by e.g. perplexity) might overcome some of these concerns
Take: Exploration hacking should not be used as a synonym for deceptive alignment.
(I have observed one such usage)
Deceptive alignment is maybe a very particular kind of exploration hacking, but the term exploration hacking (without further specification) should refer to models deliberately sandbagging “intermediate” capabilities during RL training to avoid learning a “full” capability.
“If I quantize my network with however many bits, how bad is that?” I don’t know, maybe this is one of these things where if I sat down and tried to do it, I’d realize the issue, but it seems doable to me. It seems like there’s possibly something here.
I think the reason this doesn’t work (i.e. why you can only get a Pareto frontier) is that you can only lower bound the description length of the network / components, such that a direct comparison to “loss bits” doesn’t make sense
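A minimal sketch of the kind of comparison I mean (toy linear model, uniform quantization; all details are mine, purely for illustration): quantizing to fewer bits shrinks the description length but costs "loss bits", so you trace out a Pareto frontier rather than pin down a single description length.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "network": linear regression y = X @ w_true + noise
X = rng.normal(size=(1000, 20))
w_true = rng.normal(size=20)
y = X @ w_true + 0.1 * rng.normal(size=1000)
w_fit = np.linalg.lstsq(X, y, rcond=None)[0]

def quantize(w, bits):
    # uniform quantization of the weights to 2**bits levels over their range
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (2 ** bits - 1)
    return lo + np.round((w - lo) / step) * step

def loss_bits(w):
    # Gaussian NLL per sample (constants dropped), converted from nats to bits
    resid = y - X @ w
    return 0.5 * np.mean(resid ** 2) / 0.1 ** 2 / np.log(2)

for bits in [2, 4, 8, 16]:
    desc_len = bits * w_fit.size  # bits needed to describe the quantized weights
    extra = loss_bits(quantize(w_fit, bits)) - loss_bits(w_fit)  # loss bits paid per sample
    print(f"{bits:2d} bits/param: {desc_len:3d} description bits, +{extra:.4f} loss bits/sample")
```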
Curious for your takes on the difficulty (for models) of sandbagging vs the difficulty (for AI developers) of preventing adversarial examples / jailbreaks. My quick thoughts:
- Adversarial robustness is hard, therefore sandbagging is hard (and thus easy to defeat)
- Adversarial robustness is improving, therefore we should expect sandbagging abilities to improve
- Even if sandbagging is equally as difficult as adversarial robustness, we might not look for it absent strong warning signs
- Even if sandbagging is equally difficult and we do look for it, finding “adversarial” inputs (that cause the model to reveal the capability) may be too expensive. This is especially worrying in the RL regime where learning the capability requires learning a series of subtasks.
This is really cool work and I hope it gets more attention (e.g. promoted to the Alignment Forum). In particular I love the use of CodeNames as a relatively low-compute test-bed for scalable oversight work.
A few thoughts: there are roughly 2 motivations one can have for adversarial evals of scalable oversight methods:
1) identify and patch reward-hacking opportunities,
2) evaluate whether methods detect and eliminate deliberate sandbagging.
This work is oriented toward (1) (which is totally valid). Orienting toward (2) would motivate subtle changes in the overall setup, e.g. the reward function might be something like minimizing the reference score subject to some lower bound on the overseer score, with the bound set such that when you remove the adversarial reward, normal training does not restore normal performance. This setup would produce predictably different results, e.g. we would expect the adversarial consultant to exploit the lazy judge.
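Concretely, the kind of adversarial reward I have in mind (a rough sketch; names and the penalty scheme are mine, not from the paper):

```python
def adversarial_reward(reference_score: float, overseer_score: float,
                       bound: float, penalty: float = 10.0) -> float:
    # minimize the reference (ground-truth) score, subject to keeping the
    # overseer score above a bound; the bound is chosen so that removing this
    # reward doesn't let normal training restore normal performance
    if overseer_score < bound:
        return -penalty          # constraint violated: large negative reward
    return -reference_score      # otherwise, reward tanking the true score
```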
Asymmetries in the reward structure avoid spurious critiques
a common concern with critiques / debate is that the critic can raise compelling objections to true arguments that are hard to refute (this is basically obfuscated arguments, I think). But in this setup, because the reward is defined with respect to the bad word most similar to the clue, a critique can never be “too good”: the best critique is always the bad word used in the true reward function (toy sketch below). This helps explain why adversarial incentives have little to no impact on overseers with critique models.
Future work might address this by having a set of “neutral” words mixed with the bad words, such that a negligent overseer with a fixed budget for revealing which words are neutral vs. bad could be manipulated into being overly pessimistic.
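To spell out the asymmetry I mean, here's a toy sketch (my own framing, assuming embedding vectors and a cosine-similarity scorer, which may not match the paper's exact reward):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def true_penalty(clue_vec, bad_word_vecs):
    # the true reward penalizes similarity to the bad word most similar to the clue
    return max(cosine(clue_vec, w) for w in bad_word_vecs)

def critique_score(clue_vec, critiqued_word_vec):
    # a critique flags one bad word as dangerously close to the clue
    return cosine(clue_vec, critiqued_word_vec)

# For any bad word the critic flags, critique_score <= true_penalty, with
# equality when the critic picks exactly the bad word the true reward uses,
# so a critique can never overstate the true risk.
rng = np.random.default_rng(0)
clue = rng.normal(size=8)
bad_words = [rng.normal(size=8) for _ in range(3)]
assert all(critique_score(clue, w) <= true_penalty(clue, bad_words) for w in bad_words)
```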
I think the smart Silver-sympathetic response here would be:
1. Yes, the alignment problem is a real thing, and my (Silver’s) sketch of a solution is not sufficient
2. Studying / taking inspiration from human-reward circuitry is an interesting research direction
3. We will be able to iterate on the problem before AIs are capable of taking existentially destructive actions
From (this version of) his perspective, alignment is a “normal” problem and a small piece of achieving beneficial AGI. The larger piece is the AGI part, which is why he devotes most of his time/energy/paper to it.
Obviously much ink has been spilled on why alignment might not be a normal problem (feedback loops break when you have agents taking actions which humans can’t evaluate, etc. etc.) but this is the central crux.
Maps onto “coherence over long time horizons” / agency. I suspect contamination is much worse for chess (so many direct examples of board configurations and moves)
I’ve been confused about what people are talking about when they say “trend lines indicate AGI by 2027”; seems like it’s basically this?
also rhymes with/is related to ARC’s work on presumption of independence applied to neural networks (e.g. we might want to make “arguments” that explain the otherwise extremely “surprising” fact that a neural net has the weights it does)
Instead I would describe the problem as arising from a generator and verifier mismatch: when the generator is much stronger than the verifier, the generator is incentivized to fool the verifier without completing the task successfully.
I think these are related but separate problems—even with a perfect verifier (on easy domains), scheming could still arise.
Though imperfect verifiers increase P(scheming), better verifiers increase the domain of “easy” tasks, etc.
Huh? No, control evaluations involve replacing untrusted models with adversarial models at inference time, whereas scalable oversight attempts to make trusted models at training time, so it would completely ignore the proposed mechanism to take the models produced by scalable oversight and replace them with adversarial models. (You could start out with adversarial models and see whether scalable oversight fixes them, but that’s an alignment stress test.)
I have a similar confusion (see my comment here) but seems like at least Ryan wants control evaluations to cover this case? (perhaps on the assumption that if your “control measures” are successful, they should be able to elicit aligned behavior from scheming models and this behavior can be reinforced?)
I think the concern is less “am I making intellectual progress on some project” and more “is the project real / valuable”
IMO most exciting mech-interp research since SAEs, great work.
A few thoughts / questions:
- curious about your optimism regarding learned masks as an attribution method: seems like the problem of learning mechanisms that don’t correspond to model mechanisms is real for circuits (see InterpBench) and would plausibly bite here too (though we should be able to resolve this with benchmarks on downstream tasks once APD is more mature)
- relatedly, the hidden layer “minimality” loss is really cool, excited to see how useful this is in mitigating the problem above (and diagnosing it)
- have you tried “ablating” weights by sampling from the initialization distribution rather than zero ablating? haven’t thought about it too much and don’t have a clear argument for why it would improve anything, but it feels more akin to resample ablation and “presumption of independence” type stuff (rough sketch after this list)
- seems great for mechanistic anomaly detection! very intuitive to map APD to surprise accounting (I was vaguely trying to get at a method like APD here)
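Roughly what I mean by ablating weights via the initialization distribution (assuming we know, or can estimate, the init scale of each weight matrix):

```python
import torch

def zero_ablate(weight: torch.Tensor) -> torch.Tensor:
    # standard zero ablation: remove the component entirely
    return torch.zeros_like(weight)

def init_resample_ablate(weight: torch.Tensor, init_std: float) -> torch.Tensor:
    # redraw weights from (an approximation of) the initialization
    # distribution, analogous to resample ablation over activations
    return torch.randn_like(weight) * init_std
```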
makes sense—I think I had in mind something like “estimate P(scheming | high reward) by evaluating P(high reward | scheming)”. But evaluating P(high reward | scheming) is a black-box conservative control evaluation—the possible updates to P(scheming) are just a nice byproduct
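To spell out the update I have in mind (just Bayes, nothing specific to control evaluations):

$$P(\text{scheming} \mid \text{high reward}) = \frac{P(\text{high reward} \mid \text{scheming})\, P(\text{scheming})}{P(\text{high reward})}$$

where the conservative control evaluation supplies the estimate of $P(\text{high reward} \mid \text{scheming})$, and the prior $P(\text{scheming})$ has to come from elsewhere.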
We initially called these experiments “control evaluations”; my guess is that it was a mistake to use this terminology, and we’ll probably start calling them “black-box plan-conservative control evaluations” or something.
also seems worth disambiguating “conservative evaluations” from “control evaluations”—in particular, as you suggest, we might want to assess scalable oversight methods under conservative assumptions (to be fair, the distinction isn’t all that clean—your oversight process can both train and monitor a policy. Still, I associate control more closely with monitoring, and conservative evaluations have a broader scope).
Thanks for the thorough response, and apologies for missing the case study!
I think I regret / was wrong about my initial vaguely negative reaction—scaling SAE circuit discovery to large models is a notable achievement!
Re residual skip SAEs: I’m basically on board with “only use residual stream SAEs”, but skipping layers still feels unprincipled. Like imagine if you only trained an SAE on the final layer of the model. By including all the features, you could perfectly recover the model behavior up to the SAE reconstruction loss, but you would have ~no insight into how the model computed the final-layer features. More generally, by skipping layers, you risk missing potentially important intermediate features. ofc to scale stuff you need to make sacrifices somewhere, but stuff in the vicinity of Cross-Coders feels more promising.
This post (and the accompanying paper) introduced empirical benchmarks for detecting “measurement tampering”—when AI systems alter measurements used to evaluate them.
Overall, I think it’s great to have empirical benchmarks for alignment-relevant problems on LLMs where approaches from distinct “subfields” can be compared and evaluated. The post and paper do a good job of describing and motivating measurement tampering, and justifying the various design decisions (though some of the tasks are especially convoluted).
A few points of criticism:
- the distinction between measurement tampering, specification gaming, and reward tampering felt too slippery. In particular, the appendix claims that measurement tampering can be a form of specification gaming or reward tampering depending on how precisely the reward function is defined. But if measurement tampering can be specification gaming, it’s unclear what distinguishes measurement tampering from generic specification gaming, and in what sense the measurements are robust and redundant.
- relatedly, I think the post overstates (or at least doesn’t adequately justify) the importance of measurement tampering detection. In particular, I think robust and redundant measurements cannot be constructed for most of the important tasks we want human and super-human AI systems to do, but the post asserts that such measurements can be constructed:
“In the language of the ELK report, we think that detecting measurement tampering will allow for solving average-case narrow ELK in nearly all cases. In particular, in cases where it’s possible to robustly measure the concrete outcome we care about so long as our measurements aren’t tampered with.”
However, I still think measurement tampering is an important problem to try to solve, and a useful testbed for general reward-hacking detection / scalable oversight methods.
- I’m pretty disappointed by the lack of adoption by the AI safety and general ML community (the paper only has 2 citations at time of writing, and I haven’t seen any work directly using the benchmarks on LW). Submitting to an ML conference and cleaning up the paper (integrating the motivations in this post, formalizing measurement tampering, making some of the datasets less convoluted) probably would have helped here (though obviously opportunity cost is high). Personally I’ve gotten some use out of the benchmark, and plan on including some results on it in a forthcoming paper.
maybe research fads are good?
disclaimer: contrarian take that I don’t actually believe, but marginally updates me in favor of fads
Byrne Hobart has this thesis of “bubbles as coordination mechanisms” (disclaimer: have not read the book).
If true, this should make us less sad about research fads that don’t fully deliver (e.g. SAEs): the hype encourages people to build out infrastructure they otherwise wouldn’t, which ends up being useful for other things (e.g. auto-interp, activation caching utils)
So maybe the take is “overly optimistic visions are pragmatically useful”, but be aware of operating under overly optimistic visions, and let this awareness subtly guide prioritization.
Note this also applies to conceptual research—I’m pretty skeptical that “formalizing natural abstractions” will directly lead to novel interpretability tools, but the general vibe of natural abstractions has helped my thinking about generalization.