PhD Student at UMass Amherst
Oliver Daniels
The builder-breaker thing isn't unique to CoT though, right? My gloss on the recent Obfuscated Activations paper is something like "activation engineering is not robust to arbitrary adversarial optimization, and only somewhat robust to contained adversarial optimization".
Thanks for the detailed (non-ML) example! Exactly the kind of thing I'm trying to get at.
Thanks! Huh, yeah, the Python interactive window seems like a much cleaner approach—I'll give it a try.
Thanks! Yup, Cursor is notebook compatible.
Write Good Enough Code, Quickly
Thanks!
I wish there were BibTeX functionality for Alignment Forum posts...
Concrete Methods for Heuristic Estimation on Neural Networks
I’m curious if Redwood would be willing to share a kind of “after action report” for why they stopped working on ELK/heuristic-argument-inspired stuff (e.g. Causal Scrubbing, Path Patching, Generalized Wick Decompositions, Measurement Tampering).
My impression is that it’s some mix of:
a. Control seems great
b. Heuristic arguments are a bad bet (for some of the reasons mech interp is a bad bet)
c. ARC has it covered

But the weighting is pretty important here. If it’s mostly:
a. more people should be working on heuristic-argument-inspired stuff.
b. fewer people should be working on heuristic-argument-inspired stuff (i.e. ARC employees should quit, or at least people shouldn’t take jobs at ARC).
c. people should try to work at ARC if they’re interested, but it’s going to be difficult to make progress, especially for e.g. a typical ML PhD student interested in safety.
Ultimately people should come to their own conclusions, but Redwood’s considerations would be pretty valuable information.
(The community often calls this “scalable oversight”, but we want to be clear that this does not necessarily include scaling to large numbers of situations, as in monitoring.)
I like this terminology and think the community should adopt it
Just to make it explicit and check my understanding—the residual decomposition is equivalent to the edge / factorized view of the transformer, in that we can express any term in the residual decomposition as a set of edges that form a path from input to output, e.g.
the direct term = input → output
a composed term = input → Attn 1.0 → MLP 2 → Attn 4.3 → output
And it follows that the (pre final layernorm) output of a transformer is the sum of all the “paths” from input to output constructed from the factorized DAG.
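To spell that out (my own notation, and only a sketch under the usual caveats: layernorm treated as linear / folded in and attention patterns held fixed, so each component $f_c$ acts linearly on the residual stream):

$$x_{\text{out}} \;=\; x_{\text{emb}} \;+\; \sum_{k \ge 1}\;\sum_{c_1 \prec c_2 \prec \cdots \prec c_k} \big(f_{c_k} \circ \cdots \circ f_{c_1}\big)\big(x_{\text{emb}}\big)$$

where the $c_i$ range over attention heads and MLPs in layer order, the standalone $x_{\text{emb}}$ term is the direct input → output path, and each summand is one path through the factorized DAG.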
For anyone trying to replicate / try new methods, I posted a diamonds “pure prediction model” to Hugging Face: https://huggingface.co/oliverdk/codegen-350M-mono-measurement_pred (GitHub repo here: https://github.com/oliveradk/measurement-pred/tree/master).
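A minimal loading sketch (assuming the checkpoint loads through the standard transformers auto classes with `trust_remote_code`; if the measurement-prediction head is a custom class, you may need the model code from the GitHub repo instead):

```python
from transformers import AutoTokenizer, AutoModel

model_id = "oliverdk/codegen-350M-mono-measurement_pred"

# Assumes the repo is compatible with standard auto-class loading;
# fall back to the model class in the GitHub repo if this errors.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
```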
Just read “Situational Awareness”—it definitely woke me up. AGI is real, and very plausibly (55%?) happening within this decade. I need to stop sleepwalking and get serious about contributing within the next two years.
First, some initial thoughts on the essay
Very “epic” and (self?) aggrandizing. If you believe the conclusions, it’s not unwarranted, but I worry a bit about narratives that satiate some sense of meaning and self-importance. (That counter-reaction is probably much stronger though, and on the margin it seems really valuable to “full-throatedly” take on the prospect of AGI within the next 3-5 years.)
I think most of my uncertainty lies in the “unhobbling” type of algorithmic progress—this seems especially unpredictable, and may require lots of expensive experimentation if e.g. the relevant capabilities to get some meta-cognitive process to train only emerge at a certain scale. I’m vaguely thinking back to Paul’s post on self-driving cars and AGI timelines. Maybe this is all priced in though—there’s way more research investment, and the tech path seems relatively straightforward if we can apply enough experimentation. Still, research is hard, takes a lot of serial time, and is less predictable than e.g. industrial processes. (I’m kind of just saying this though, not actually sure how to quantify it—I’m pretty sure people have analyses of insight generation or whatever, idk...)
I previously thought the argument for measurement tampering being more tractable than general ELK was mostly about the structural / causal properties of multiple independent measurements, but I think I’m more swayed by the argument that measurement tampering will just be more obvious (both easier to see using interpretability and more anomalous in general) than e.g. sycophancy. This is a flimsier argument though, and is less likely to hold when tampering is more subtle.
Here’s a revised sketch.

A few notes:
- I use Scalable Oversight to refer to both Alignment and Control
- I’m confused whether weak to strong learning is a restatement of scalable oversight, ELK, or its own thing, so I ignore it
- I don’t explicitly include easy-to-hard generalization; I think OOD basically covers it
- taxonomies and abstractions are brittle and can be counterproductive
Scalable Oversight Taxonomy
- Scalable Oversight
  - Scalable Alignment
    - Benchmarks / Tasks
      - Sandwiching Experiments (human amateurs + model, ground truth from human experts)
      - Weak models supervising Strong models
    - Approaches
      - Debate
      - Recursive reward modeling
      - (Solution to Eliciting Latent Knowledge) + Narrow Elicitation
        - (Note—I think this assumes, more than prior scalable oversight ideas, that there will be a base model with adequate knowledge, such that the hard part is extracting the knowledge rather than teaching the model)
        - Eliciting Latent Knowledge
          - Approaches
            - Contrast Consistent Search
            - Confidence
            - Intermediate Probing
            - “Speed Prior”
            - “Simplicity Prior”
            - Concept Extrapolation—learn all salient generalizations, use expensive supervision to select the correct one
            - IID Mechanistic Anomaly Detection + expensive supervision on anomalies
          - Subclasses
            - Measurement Tampering Detection
              - Approaches
                - OOD Mechanistic Anomaly Detection
                  - In distribution
                  - Out of Distribution (likely? requires multiple-measurement structure)
                - Concept Extrapolation
                  - train diverse probes on untrusted data, select the probe that predicts positive measurements less frequently
        - Narrow Elicitation
          - ...
  - Scalable Control
    - Weak Review
    - Untrusted Rephrase or whatever
    - Coup probes
    - MAD (Review all anomalies)
- Trojans
  - ...
  - MAD (maybe?)
- Adversarial Examples
  - ...
  - MAD (maybe?)
- Natural Mechanism Distinction
  - MAD
- Spurious Correlate Detection / Resolution
  - Concept Extrapolation
I think I’m mostly right, but using a somewhat confused frame.
It makes more sense to think of MAD approaches as detecting all abnormal reasons (including deceptive alignment) by default, and then if we get that working we’ll try to decrease false anomalies by doing something like comparing the least common ancestor of the measurements in a novel mechanism to the least common ancestor of the measurements on trusted mechanisms.
One confusion I have with MAD as an approach to ELK is that it seems to assume some kind of initial inner alignment. If we’re flagging when the model takes actions / makes predictions for “unusual reasons”, where unusual is defined with respect to some trusted set, but aligned and misaligned models are behaviorally indistinguishable on the trusted set, then a model could learn to do things for misaligned reasons on the trusted set, and then use those same reasons on the untrusted set. For example, a deceptively aligned model would appear aligned in training but attempt takeover in deployment for the “same reason” (e.g. to maximize paperclips), but a MAD approach that “properly” handles out-of-distribution cases would not flag takeover attempts, because we want models to be able to respond to novel situations.
I guess this is part of what motivates measurement tampering as a subclass of ELK—instead of trying to track the motivations of the agent as reasons, we try to track the reasons for the measurement predictions, and we have some trusted set with no tampering, where we know the reason for the measurements is ~exactly the thing we want to be measuring.
Now time to check my answer by rereading https://www.alignmentforum.org/posts/vwt3wKXWaCvqZyF74/mechanistic-anomaly-detection-and-elk
Clarifying the relationship between mechanistic anomaly detection (MAD), measurement tampering detection (MTD), weak to strong generalization (W2SG), weak to strong learning (W2SL), and eliciting latent knowledge (ELK). (Nothing new or interesting here, I just often lose track of these relationships in my head.)
eliciting latent knowledge is an approach to scalable oversight which hopes to use the latent knowledge of a model as a supervision signal or oracle.
weak to strong learning is an experimental setup for evaluating scalable oversight protocols, and is a class of sandwiching experiments
weak to strong generalization is a class of approaches to ELK which relies on generalizing a “weak” supervision signal to more difficult domains using the inductive biases and internal structure of the strong model.
measurement tampering detection is a class of weak to strong generalization problems, where the “weak” supervision consists of multiple measurements which are sufficient for supervision in the absence of “tampering” (where tampering is not yet formally defined)
mechanistic anomaly detection is an approach to ELK, where examples are flagged as anomalous if they cause the model to do things for “different reasons” than on a trusted dataset, where “different reasons” are defined w.r.t. internal model cognition and structure (a minimal baseline sketch follows this list)
mechanistic anomaly detection methods that work for ELK should also probably work for other problems (such as backdoor detection and adversarial example detection)
so when developing benchmarks for mechanistic anomaly detection, we want to test methods both on standard machine learning security problems (adversarial examples and trojans) that have similar structure to scalable oversight problems, and against other ELK approaches (e.g. CCS) and other scalable oversight approaches (e.g. debate)
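For intuition, here is a minimal sketch of the kind of MAD baseline I have in mind (not any particular paper’s method; all names are hypothetical): fit a Gaussian to the model’s activations on the trusted set and flag test points whose Mahalanobis distance from that distribution is unusually large.

```python
import numpy as np

def fit_trusted_stats(trusted_acts: np.ndarray):
    """Fit mean and (regularized) inverse covariance of trusted-set activations, shape (n, d)."""
    mu = trusted_acts.mean(axis=0)
    cov = np.cov(trusted_acts, rowvar=False) + 1e-4 * np.eye(trusted_acts.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_scores(acts: np.ndarray, mu: np.ndarray, cov_inv: np.ndarray) -> np.ndarray:
    """Squared Mahalanobis distance of each activation from the trusted distribution."""
    diff = acts - mu
    return np.einsum("nd,de,ne->n", diff, cov_inv, diff)

# Usage sketch: flag untrusted examples whose score exceeds a high quantile
# of the trusted-set scores.
# mu, cov_inv = fit_trusted_stats(trusted_acts)
# threshold = np.quantile(anomaly_scores(trusted_acts, mu, cov_inv), 0.99)
# flagged = anomaly_scores(untrusted_acts, mu, cov_inv) > threshold
```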
Oh I see—by all(sensor_preds) I meant sum(logit_i for i in range(n_sensors)) (the probability that all sensors are activated). Makes sense, thanks!
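To spell out the aggregation I mean (a minimal sketch with hypothetical names, assuming independent per-sensor predictions so the joint probability is a product, i.e. a sum in log space):

```python
import torch
import torch.nn.functional as F

# one logit per sensor / measurement prediction for a single example
sensor_logits = torch.tensor([2.1, 0.3, 1.7])

# P(all sensors activated), assuming the per-sensor predictions are independent:
# product of per-sensor probabilities = exp(sum of log-sigmoids)
log_p_all = F.logsigmoid(sensor_logits).sum()
p_all = log_p_all.exp()
```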
Just read both posts and they’re great (as is The Witness). It’s funny though, part of me wants to defend OOP—I do think there’s something to finding really good abstractions (even preemptively), but that it’s typically not worth it for self-contained projects with small teams and fixed time horizons (e.g. ML research projects, but also maybe indie games).