Yeah, IMO we should just add a bunch of functionality for integrating alignment forum stuff more with academic things. It’s been on my to do list for a long time.
Suppose the US government pursued a “Manhattan Project for AGI”. At its onset, it’s primarily fuelled by a desire to beat China to AGI. However, there’s some chance that its motivation shifts over time (e.g., if the government ends up thinking that misalignment risks are a big deal, its approach to AGI might change.)
Do you think this would be (a) better than the current situation, (b) worse than the current situation, or (c) it depends on XYZ factors?
Worse than the current situation, because the counterfactual is that some later project happens which kicks off in a less race-y manner.
In other words, whatever the chance of its motivation shifting over time, it seems dominated by the chance that starting the equivalent project later would just have better motivations from the outset.
One factor is different incentives for decision-makers. The incentives (and the mindset) for tech companies is to move fast and break things. The incentives (and mindset) for government workers is usually vastly more conservative.
So if it is the government making decisions about when to test and deploy new systems, I think we’re probably far better off WRT caution.
That must be weighed against the government typically being very bad at technical matters. So even an attempt to be cautious could be thwarted by lack of technical understanding of risks.
Of course, the Trump administration is attempting to instill a vastly different mindset, more like tech companies. So if it’s that administration we’re talking about, we’re probably worse off on net with a combination of lack of knowledge and YOLO attitudes. Which is unfortunate—because this is likely to happen anyway.
As Habryka and others have noted, it also depends on whether it reduces race dynamics by aggregating efforts across companies, or mostly just throws funding fuel on the race fire.
I’m currently feeling very uncertain about the relative costs and benefits of centralization in general. I used to be more into the idea of a national project that centralized domestic projects and thus reduced domestic racing dynamics (and arguably better aligned incentives), but now I’m nervous about the secrecy that would likely entail, and think it’s less clear that a non-centralized situation inevitably leads to a decisive strategic advantage for the leading project. Which is to say, even under pretty optimistic assumptions about how much such a project invests in alignment, security, and benefit-sharing, I’m pretty uncertain that this would be good, and with more realistic assumptions I probably lean towards it being bad. But it super depends on the governance, the wider context, how a “Manhattan Project” would affect domestic companies and China’s policymaking, etc.
(I think a great start would be not naming it after the Manhattan Project, though. It seems path dependent, and that’s not a great first step.)
I think this is a (c) leaning (b), especially given that we’re doing it in public. Remember, the Manhattan Project was a highly-classified effort and we know it by an innocuous name given to it to avoid attention.
Saying publicly, “yo, China, we view this as an all-costs priority, hbu” is a great way to trigger a race with China...
But if it turned out that we knew from ironclad intel with perfect sourcing that China was already racing (I don’t expect this to be the case), then I would lean back more towards (c).
My own impression is that this would be an improvement over the status quo. Main reasons:
A lot of my P(doom) comes from race dynamics.
Right now, if a leading lab ends up realizing that misalignment risks are super concerning, they can’t do much to end the race. Their main strategy would be to go to the USG.
If the USG runs the Manhattan Project (or there’s some sort of soft nationalization in which the government ends up having a much stronger role), it’s much easier for the USG to see that misalignment risks are concerning & to do something about it.
A national project would be more able to slow down and pursue various kinds of international agreements (the national project has more access to POTUS, DoD, NSC, Congress, etc.)
I expect the USG to be stricter on various security standards. It seems more likely to me that the USG would EG demand a lot of security requirements to prevent model weights or algorithmic insights from leaking to China. One of my major concerns is that people will want to pause at GPT-X but they won’t feel able to because China stole access to GPT-Xminus1 (or maybe even a slightly weaker version of GPT-X).
In general, I feel like USG natsec folks are less “move fast and break things” than folks in SF. While I do think some of the AGI companies have tried to be less “move fast and break things” than the average company, I think corporate race dynamics & the general cultural forces have been the dominant factors and undermined a lot of attempts at meaningful corporate governance.
(Caveat that even though I see this as a likely improvement over status quo, this doesn’t mean I think this is the best thing to be advocating for.)
(Second caveat that I haven’t thought about this particular question very much and I could definitely be wrong & see a lot of reasonable counterarguments.)
As you know, I have huge respect for USG natsec folks. But there are (at least!) two flavors of them: 1) the cautious, measure-twice-cut-once sort that have carefully managed deterrencefor decades, and 2) the “fuck you, I’m doing Iran-Contra” folks. Which do you expect will get in control of such a program ? It’s not immediately clear to me which ones would.
Why is the built-in assumption for almost every single post on this site that alignment is impossible and we need a 100 year international ban to survive? This does not seem particularly intellectually honest to me. It is very possible no international agreement is needed. Alignment may turn out to be quite tractable.
It’s not every post, but there are still a lot of people who think that alignment is very hard.
The more common assumption is that we should assume that alignment isn’t trivial, because an intellectually honest assessment of the range of opinions suggests that we collectively do not yet know how hard alignment will be.
Yudkowsky has a pinned tweet that states the problem quite well: it’s not so much that alignment is necessarily infinitely difficult, but that it certainly doesn’t seem anywhere as easy as advancing capabilities, and that’s a problem when what matters is whether the first powerful AI is aligned:
Safely aligning a powerful AI will be said to be ‘difficult’ if that work takes two years longer or 50% more serial time, whichever is less, compared to the work of building a powerful AI without trying to safely align it.
Another frame: If alignment turns out to be easy, then the default trajectory seems fine (at least from an alignment POV. You might still be worried about EG concentration of power).
If alignment turns out to be hard, then the policy decisions we make to affect the default trajectory matter a lot more.
This means that even if misalignment risks are relatively low, a lot of value still comes from thinking about worlds where misalignment is hard (or perhaps “somewhat hard but not intractably hard”).
A mere 5% chance that the plane will crash during your flight is consistent with considering this extremely concerning and doing anything in your power to avoid getting on it. “Alignment is impossible” is not necessary for great concern, isn’t implied by great concern.
I don’t think this line of argument is a good one. If there’s a 5% chance of x-risk and, say, a 50% chance that AGI makes the world just generally be very chaotic and high-stakes over the next few decades, then it seems very plausible that you should mostly be optimizing for making the 50% go well rather than the 5%.
Still consistent with great concern. I’m pointing out that O O’s point isn’t locally valid, observing concern shouldn’t translate into observing belief that alignment is impossible.
If the project was fueled by a desire to beat China, the structure of the Manhattan project seems unlikely to resemble the parts of the structure of the Manhattan project that seemed maybe advantageous here, like having a single government-controlled centralized R&D effort.
My guess is if something like this actually happens, it would involve a large number of industry subsidies, and would create strong institutional momentum that even when things got dangerous, to push the state of the art forward, and in as much as there is pushback, continue dangerous development in secret.
In the case of nuclear weapons the U.S. really went very far under the advisement of Edward Teller, so I think the outside view here really doesn’t look good:
Good points. Suppose you were on a USG taskforce that had concluded they wanted to go with the “subsidy model”, but they were willing to ask for certain concessions from industry.
Are there any concessions/arrangements that you would advocate for? Are there any ways to do the “subsidy model” well, or do you think the model is destined to fail even if there were a lot of flexibility RE how to implement it?
I think “full visibility” seems like the obvious thing to ask for, and something that could maybe improve things. Also, preventing you from selling your products to the public, and basically forcing you to sell your most powerful models only to the government, gives the government more ability to stop things when it comes to it.
I will think more about this, I don’t have any immediate great ideas.
I have an answer to that: making sure that NIST:AISI had at least scores of automated evals for checkpoints of any new large training runs, as well as pre-deployment eval access.
Seems like a pretty low-cost, high-value ask to me. Even if that info leaked from AISI, it wouldn’t give away corporate algorithmic secrets.
A higher cost ask, but still fairly reasonable, is pre-deployment evals which require fine-tuning. You can’t have a good sense of a what the model would be capable of in the hands of bad actors if you don’t test fine-tuning it on hazardous info.
What do you think are the most important factors for determining if it results in them behaving responsibly later?
For instance, if you were in charge of designing the AI Manhattan Project, are there certain things you would do to try to increase the probability that it leads to the USG “behaving more responsibly later?”
List of some larger mech interp project ideas (see also: short and medium-sized ideas). Feel encouraged to leave thoughts in the replies below!
What is going on with activation plateaus: Transformer activations space seems to be made up of discrete regions, each corresponding to a certain output distribution. Most activations within a region lead to the same output, and the output changes sharply when you move from one region to another. The boundaries seem to correspond to bunched-up ReLU boundaries as predicted by grokking work. This feels confusing. Are LLMs just classifiers with finitely many output states? How does this square with the linear representation hypothesis, the success of activation steering, logit lens etc.? It doesn’t seem in obvious conflict, but it feels like we’re missing the theory that explains everything. Concrete project ideas:
Can we in fact find these discrete output states? Of course we expect thee to be a huge number, but maybe if we restrict the data distribution very much (a limited kind of sentence like “person being described by an adjective”) we are in a regime with <1000 discrete output states. Then we could use clustering (K-means and such) on the model output, and see if the cluster assignments we find map to activation plateaus in model activations. We could also use a tiny model with hopefully less regions, but Jett found regions to be crisper in larger models.
How do regions/boundaries evolve through layers? Is it more like additional layers split regions in half, or like additional layers sharpen regions?
What’s the connection to the grokking literature (as the one mentioned above)?
Can we connect this to our notion of features in activation space? To some extent “features” are defined by how the model acts on them, so these activation regions should be connected.
Investigate how steering / linear representations look like through the activation plateau lens. On the one hand we expect adding a steering vector to smoothly change model output, on the other hand the steering we did here to find activation plateaus looks very non-smooth.
If in fact it doesn’t matter to the model where in an activation plateau an activation lies, would end-to-end SAEs map all activations from a plateau to a single point? (Anecdotally we observed activations to mostly cluster in the centre of activation plateaus so I’m a bit worried other activations will just be out of distribution.) (But then we can generate points within a plateau by just running similar prompts through a model.)
We haven’t managed to make synthetic activations that match the activation plateaus observed around real activations. Can we think of other ways to try? (Maybe also let’s make this an interpretability challenge?)
Use sensitive directions to find features: Can we use the sensitivity of directions as a way to find the “true features”, some canonical basis of features? In a recent post we found current SAE features to look less special that expected, so I’m a bit cautious about this. But especially after working on some toy models about computation in superposition I’d be keen to explore the error correction predictions made here (paper, comment).
Test of we can fully sparsify a small model: Try the full pipeline of training SAEs everywhere, or training Transcoders & Attention SAEs, and doing all that such that connections between features are sparse (such that every feature only interacts with a few other features). The reason we want that is so that we can have simple computational graphs, and find simple circuits that explain model behaviour.
I expect that—absent of SAE improvements finding the “true feature” basis—you’ll need to train them all together with a penalty for the sparsity of interactions. To be concrete, an inefficient thing you could do is the following: Train SAEs on every residual stream layer, with a loss term that L1 penalises interactions between adjacent SAE features. This is hard/inefficient because the matrix of SAE interactions is huge, plus you probably need attributions to get these interactions which are expensive to compute (at every training step!). I think the main question for this project is to figure out whether there is a way to do this thing efficiently. Talk to Logan Smith, Callum McDoughall, and I expect there are a couple more people who are trying something like this.
List of some medium-sized mech interp project ideas (see also: shorter and longer ideas). Feel encouraged to leave thoughts in the replies below!
Toy model of Computation in Superposition: The toy model of computation in superposition (CIS; Circuits-in-Sup, Comp-in-Sup post / paper) describes a way in which NNs could perform computation in superposition, rather than just storing information in superposition (TMS). It would be good to have some actually trained models that do this, in order (1) to check whether NNs learn this algorithm or a different one, and (2) to test whether decomposition methods handle this well.
This could be, in the simplest form, just some kind of non-trivial memorisation model, or AND-gate model. Just make sure that the task does in fact require computation, and cannot be solved without the computation. A more flashy versions could be a network trained to do MNIST and FashionMNIST at the same time, though this would be more useful for goal (2).
Transcoder clustering:Transcoders are a sparse dictionary learning method that e.g. replaces an MLP with an SAE-like sparse computation (basically an SAE but not mapping activations to itself but to the next layer). If the above model of computation / circuits in superposition is correct (every computation using multiple ReLUs for redundancy) then the transcoder latents belonging to one computation should co-activate. Thus it should be possible to use clustering of transcoder activation patterns to find meaningful model components (circuits in the circuits-in-superposition model). (Idea suggested by @Lucius Bushnaq, mistakes are mine!) There’s two ways to do this project:
Train a toy model of circuits in superposition (see project above), train a transcoder, cluster latent activations, and see if we can recover the individual circuits.
Or just try to cluster latent activations in an LLM transcoder, either existing (e.g. TinyModel) or trained on an LLM, and see if the clusters make any sense.
Investigating / removing LayerNorm (LN): For GPT2-small I showed that you can remove LN layers gradually while fine-tuning without loosing much model performance (workshop paper, code, model). There are three directions that I want to follow-up on this project.
Can we use this to find out which tasks the model did use LN for? Are there prompts for which the noLN model is systematically worse than a model with LN? If so, can we understand how the LN acts mechanistically?
The second direction for this project is to check whether this result is real and scales. I’m uncertain about (i) given that training GPT2-small is possible in a few (10?) GPU-hours, does my method actually require on the order of training compute? Or can it be much more efficient (I have barely tried to make it efficient so far)? This project could demonstrate that the removing LayerNorm process is tractable on a larger model (~Gemma-2-2B?), or that it can be done much faster on GPT2-small, something on the order of O(10) GPU-minutes.
Finally, how much did the model weights change? Do SAEs still work? If it changed a lot, are there ways we can avoid this change (e.g. do the same process but add a loss to keep the SAEs working)?
List of some short mech interp project ideas (see also: medium-sized and longer ideas). Feel encouraged to leave thoughts in the replies below!
Directly testing the linear representation hypothesis by making up a couple of prompts which contain a few concepts to various degrees and test
Does the model indeed represent intensity as magnitude? Or are there separate features for separately intense versions of a concept? Finding the right prompts is tricky, e.g. it makes sense that friendship and love are different features, but maybe “my favourite coffee shop” vs “a coffee shop I like” are different intensities of the same concept
Do unions of concepts indeed represent addition in vector space? I.e. is the representation of “A and B” vector_A + vector_B? I wonder if there’s a way you can generate a big synthetic dataset here, e.g. variations of “the soft green sofa” → “the [texture] [colour] [furniture]”, and do some statistical check.
Mostly I expect this to come out positive, and not to be a big update, but seems cheap to check.
SAEs vs Clustering: How much better are SAEs than (other) clustering algorithms? Previously I worried that SAEs are “just” finding the data structure, rather than features of the model. I think we could try to rule out some “dataset clustering” hypotheses by testing how much structure there is in the dataset of activations that one can explain with generic clustering methods. Will we get 50%, 90%, 99% variance explained?
I think a second spin on this direction is to look at “interpretability” / “mono-semanticity” of such non-SAE clustering methods. Do clusters appear similarly interpretable? I This would address the concern that many things look interpretable, and we shouldn’t be surprised by SAE directions looking interpretable. (Related: Szegedy et al., 2013 look at random directions in an MNIST network and find them to look interpretable.)
Activation steering vs prompting: I’ve heard the view that “activation steering is just fancy prompting” which I don’t endorse in its strong form (e.g. I expect it to be much harder for the model to ignore activation steering than to ignore prompt instructions). However, it would be nice to have a prompting-baseline for e.g. “Golden Gate Claude”. What if I insert a “<system> Remember, you’re obsessed with the Golden Gate bridge” after every chat message? I think this project would work even without the steering comparison actually.
Why I’m not too worried about architecture-dependent mech interp methods:
I’ve heard people argue that we should develop mechanistic interpretability methods that can be applied to any architecture. While this is certainly a nice-to-have, and maybe a sign that a method is principled, I don’t think this criterion itself is important.
I think that the biggest hurdle for interpretability is to understand any AI that produces advanced language (>=GPT2 level). We don’t know how to write a non-ML program that speaks English, let alone reason, and we have no idea how GPT2 does it. I expect that doing this the first time is going to be significantly harder, than doing this the 2nd time. Kind of how “understand an Alien mind” is much harder than “understand the 2nd Alien mind”.
Edit: Understanding an image model (say Inception V1 CNN) does feel like a significant step down, in the sense that these models feel significantly less “smart” and capable than LLMs.
Agreed. I do value methods being architecture independent, but mostly just because of this:
and maybe a sign that a method is principled
At scale, different architectures trained on the same data seem to converge to learning similar algorithms to some extent. I care about decomposing and understanding these algorithms, independent of the architecture they happen to be implemented on. If a mech interp method is formulated in a mostly architecture independent manner, I take that as a weakly promising sign that it’s actually finding the structure of the learned algorithm, instead of structure related to the implmentation on one particular architecture.
Agreed. A related thought is that we might only need to be able to interpret a single model at a particular capability level to unlock the safety benefits, as long as we can make a sufficient case that we should use that model. We don’t care inherently about interpreting GPT-4, we care about there existing a GPT-4 level model that we can interpret.
I’ve heard people argue that we should develop mechanistic interpretability methods that can be applied to any architecture.
I think the usual reason this claim is made is because the person making the claim thinks it’s very plausible LLMs aren’t the paradigm that lead to AGI. If that’s the case, then interpretability that’s indexed heavily on them gets us understanding of something qualitatively weaker than we’d like. I agree that there’ll be some transfer, but it seems better and not-very-hard to talk about how well different kinds of work transfer.
Twitter is designed for writing things off the top of your head, and things that others will share or reply to. There are almost no mechanisms to reward good ideas, to punish bad ones, nor for incentivizing the consistency of your views, nor any mechanism for even seeing whether someone updates their beliefs, or whether a comment pointed out that they’re wrong.
(The fact that there are comments is really really good, and it’s part of what makes Twitter so much better than mainstream media. Community Notes is great too.)
The solution to Twitter sucking, is not to follow different people, and DEFINITELY not to correct every wrong statement (oops), it’s to just leave. Even smart people, people who are way smarter and more interesting and knowledgeable and funny than me, simply don’t care that much about their posts. If a post is thought-provoking, you can’t even do anything with that fact, because nothing about the website is designed for deeper conversations. Though I’ve had a couple of nice moments where I went deep into a topic with someone in the replies.
Shortforms are better
The above thing is also a danger with Shortforms, but to a lesser extent, because things are easier to find, and it’s much more likely that I’ll see something I’ve written, see that I’m wrong, and delete it or edit it. Posts on Twitter or not editable, they’re harder to find, there’s no preview-on-hover, there is no hyperlinked text.
Collection of some mech interp knowledge about transformers:
Writing up folk wisdom & recent results, mostly for mentees and as a link to send to people. Aimed at people who are already a bit familiar with mech interp. I’ve just quickly written down what came to my head, and may have missed or misrepresented some things. In particular, the last point is very brief and deserves a much more expanded comment at some point. The opinions expressed here are my own and do not necessarily reflect the views of Apollo Research.
Transformers take in a sequence of tokens, and return logprob predictions for the next token. We think it works like this:
Activations represent a sum of feature directions, each direction representing to some semantic concept. The magnitude of directions corresponds to the strength or importance of the concept.
These features may be 1-dimensional, but maybe multi-dimensional features make sense too. We can either allow for multi-dimensional features (e.g. circle of days of the week), acknowledge that the relative directions of feature embeddings matter (e.g. considering days of the week individual features but span a circle), or both. See also Jake Mendel’s post.
The concepts may be “linearly” encoded, in the sense that two concepts A and B being present (say with strengths α and β) are represented as α*vector_A + β*vector_B). This is the key assumption of linear representation hypothesis. See Chris Olah & Adam Jermyn but also Lewis Smith.
The residual stream of a transformer stores information the model needs later. Attention and MLP layers read from and write to this residual stream. Think of it as a kind of “shared memory”, with this picture in your head, from Anthropic’s famous AMFTC.
This residual stream seems to slowly accumulate information throughout the forward pass, as suggested by LogitLens.
Maybe think of each transformer block / layer as doing a serial step of computation. Though note that layers don’t need to be privileged points between computational steps, a computation can be spread out over layers (see Anthropic’s Crosscoder motivation)
Superposition. There can be more features than dimensions in the vector space, corresponding to almost-orthogonal directions. Established in Anthropic’s TMS. You can have a mix as well. See Chris Olah’s post on distributed representations for a nice write-up.
Superposition requires sparsity, i.e. that only few features are active at a time.
The model starts with token (and positional) embeddings.
We think token embeddings mostly store features that might be relevant about a given token (e.g. words in which it occurs and what concepts they represent). The meaning of a token depends a lot on context.
We think positional embeddings are pretty simple (in GPT2-small, but likely also other models). In GPT2-small they appear to encode ~4 dimensions worth of positional information, consisting of “is this the first token”, “how late in the sequence is it”, plus two sinusoidal directions. The latter three create a helix.
PS: If you try to train an SAE on the full embedding you’ll find this helix split up into segments (“buckets”) as individual features (e.g. here). Pay attention to this bucket-ing as a sign of compositional representation.
The overall Transformer computation is said to start with detokenization: accumulating context and converting the pure token representation into a context-aware representation of the meaning of the text. Early layers in models often behave differently from the rest. Lad et al. claim three more distinct stages but that’s not consensus.
There’s a couple of common motifs we see in LLM internals, such as
LLMs implementing human-interpretable algorithms.
Induction heads (paper, good illustration): attention heads being used to repeat sequences seen previously in context. This can reach from literally repeating text to maybe being generally responsible for in-context learning.
Indirect object identification, docstring completion. Importantly don’t take these early circuits works to mean “we actually found the circuit in the model” but rather take away “here is a way you could implement this algorithm in a transformer” and maybe the real implementation looks something like it.
In general we don’t think this manual analysis scales to big models (see e.g. Tom Lieberum’s paper)
Also we want to automate the process, e.g. ACDC and follow-ups (1, 2).
My personal take is that all circuits analysis is currently not promising because circuits are not crisp. With this I mean the observation that a few distinct components don’t seem to be sufficient to explain a behaviour, and you need to add more and more components, slowly explaining more and more performance. This clearly points towards us not using the right units to decompose the model. Thus, model decomposition is the major area of mech interp research right now.
Moving information. Information is moved around in the residual stream, from one token position to another. This is what we see in typical residual stream patching experiments, e.g. here.
Information storage. Early work (e.g. Mor Geva) suggests that MLPs can store information as key-value memories; generally folk wisdom is that MLPs store facts. However, those facts seem to be distributed and non-trivial to localise (see ROME & follow-ups, e.g. MEMIT). The DeepMind mech interp team tried and wasn’t super happy with their results.
Logical gates. We think models calculate new features from existing features by computing e.g. AND and OR gates. Here we show a bunch of features that look like that is happening, and the papers by Hoagy Cunningham & Sam Marks show computational graphs for some example features.
There are hypotheses on what layer norm could be responsible for, but it can’t do anything substantial since you can run models without it (e.g. TinyModel, GPT2_noLN)
(Sparse) circuits agenda. The current mainstream agenda in mech interp (see e.g. Chris Olah’s recent talk) is to (1) find the right components to decompose model activations, to (2) understand the interactions between these features, and to finally (3) understand the full model.
The first big open problem is how to do this decomposition correctly. There’s plenty of evidence that the current Sparse Autoencoders (SAEs) don’t give us the correct solution, as well as conceptual issues. I’ll not go into the details here to keep this short-ish.
The second big open problem is that the interactions, by default, don’t seem sparse. This is expected if there are multiple ways (e.g. SAE sizes) to decompose a layer, and adjacent layers aren’t decomposed correspondingly. In practice this means that one SAE feature seems to affect many many SAE features in the next layers, more than we can easily understand. Plus, those interactions seem to be not crisp which leads to the same issue as described above.
I think this is what most mech interp researchers more or less think. Though I definitely expect many researchers would disagree with individual points, nor does it fairly weigh all views and aspects (it’s very biased towards “people I talk to”). (Also this is in no way an Apollo / Apollo interp team statement, just my personal view.)
’The report doesn’t go into specifics but the idea seems to be to build / commandeer the computing resources to scale to AGI, which could include compelling the private labs to contribute talent and techniques.
DX rating is the highest priority DoD procurement standard. It lets DoD compel companies, set their own price, skip the line, and do basically anything else they need to acquire the good in question.′ https://x.com/hamandcheese/status/1858902373969564047
“ He grew up in a Jewish family in Europe.[9] Helberg is openly gay.[10] He married American investor Keith Rabois in a 2018 ceremony officiated by Sam Altman.”
Might this be an angle to understand the influence that Sam Altman has on recent developments in the US government?
This chapter on AI follows immediately after the year in review, I went and checked the previous few years’ annual reports to see what the comparable chapters were about, they are
2023: China’s Efforts To Subvert Norms and Exploit Open Societies
2022: CCP Decision-Making and Xi Jinping’s Centralization Of Authority
2021: U.S.-China Global Competition (Section 1: The Chinese Communist Party’s Ambitions and Challenges at its Centennial
2020: U.S.-China Global Competition (Section 1: A Global Contest For Power and Influence: China’s View of Strategic Competition With the United States)
And this year it’s Technology And Consumer
Product Opportunities and Risks
(Chapter 3: U.S.-China Competition in
Emerging Technologies)
Reminds of when Richard Ngo said something along the lines of “We’re not going to be bottlenecked by politicians not caring about AI safety. As AI gets crazier and crazier everyone would want to do AI safety, and the question is guiding people to the right AI safety policies”
We’re not going to be bottlenecked by politicians not caring about AI safety. As AI gets crazier and crazier everyone would want to do AI safety, and the question is guiding people to the right AI safety policies
I think we’re seeing more interest in AI, but I think interest in “AI in general” and “AI through the lens of great power competition with China” has vastly outpaced interest in “AI safety”. (Especially if we’re using a narrow definition of AI safety; note that people in DC often use the term “AI safety” to refer to a much broader set of concerns than AGI safety/misalignment concerns.)
I do think there’s some truth to the quote (we are seeing more interest in AI and some safety topics), but I think there’s still a lot to do to increase the salience of AI safety (and in particular AGI alignment) concerns.
Disclaimer: If you have short ASI timelines, this post is not for you. I don’t have short ASI timelines and this is not a proposal to build ASI.
Useful background reading: Principles by Ray Dalio. Holden Karnofsky worked at Bridgewater and shaped the EA movement in its image.
Isolated Island Plan
Most powerful organisations (business or political) in the world rely on secrets to help keep their power.
Hacking and espionage are both increasing, making it such that very few organisations in the world can actually keep secrets.
Holding a business or political secret between a hundred people is much harder than holding it between five people.
Organisations can move much faster if they can iterate and obtain intellectual contributions from atleast a hundred people. Iteration speed is critical to the survival of an organisation. Secrecy slows down organisations.
The few organisations that can hold their secrets and do important stuff will become the most powerful organisations in human history.
A good way of building such an organisation is to select a few hundred highly trustworthy people, found an island nation with its own airgapped cluster. Complete transparency inside the island, complete secrecy outside. Information can flow into the island (maybe via firewalled internet), but information can’t flow out of the island.
After many years, if the org has achieved useful shit, they can vote to share it with the outside world.
More on hacking
TSMC may be able to backdoor every machine on Earth, including airgapped computer clusters of militaries and S&P500 companies.
In general, most hardware should be assumed backdoorable. And secure practices should rely on physics, not on trusting hardware or trusting software. The best way to erase a private key from RAM is to pull the power plug. The best way to erase a private key from disk is to smash it into pieces with a hammer. The best way to verify someone else’s key is to meet them in person. The best way to disconnect a machine from wireless networks is to put it inside a faraday cage. The best way to export information from a machine is to print plaintext (.txt) on paper and actually read what’s on the paper.
More on espionage
Computer hardware has become cheap enough that basically anyone can perform espionage. You don’t need a big team or a lot of money.
(This decrease in cost started with the printing press, accelerated with radio and TV, and finally smartphones and global internet. As long as you’re not dealing with many days worth of video footage, data collection, storage, transmission and analysis are affordable to millions of people.)
Edward Snowden hid NSA documents in an SD card in his mouth. (See: Permanent Record on libgen) A random italian chef could wear a video camera inside his tshirt and obtain HD footage of North Korean weapons brokers. (See: The Mole on youtube)
Since anyone can do it, motivations can be diverse. You can do it for the clicks or for money or for the lulz or for the utilons of your preferred ideological side.
Anyone can persuade and recruit people from the internet to join their spy org, just as anyone today can persuade and recruit people online for their political group.
More on the isolated island plan:
Build a secret org on an isolated island to study safe technological progress. Select a few hundred people.
Select people from both STEM and humanities background. Ensure initial pool of people is intellectually diverse and was recruited via multiple intellectual attractors. (i.e. not everyone is there because of Yudkowsky, although Yudkowsky fans are accepted as well) The attractors must be diverse enough so the select members don’t all share the same unproven assumptions but narrow enough so the selected members can actually get useful work done.
DO NOT build a two-tier system where the leaders get to keep secrets from the rest of the island. Complete transparency of leaders is good.
Provide enough funding to members to live on the island for a lifetime, and provide complete freedom to pick research agendas including those the leaders and other island members may not like.
Economic and military reliance of the island on the US govt is fine. The strategy is to keep secrets and fly under the radar, not wield power and publicly tell everyone you’re doing dangerous stuff.
Selecting a few hundred people is good (rather than just two or ten) because it allows people lot of freedom to pick their friends, partners and work colleagues. People won’t feel as pressured to hide their disagreements with each other and pretend their relationships are fine.
Cons of isolated island plan:
The island will inevitably end up atleast a bit of an echo chamber, despite attempts to defend against this.
Less intellectual contributions from people outside the org means slower iteration speed.
Research orgs that are not run top-down have more variable outcomes, as they depend on what large number of people do. Large groups of people are unpredictable and uncontrollable unless you deliberately try to control them.
Can TSMC backdoor every machine on Earth, including airgapped computer clusters for the US govt and all S&P500 companies?
Here’s how the backdoor might work:
Maintain a hardcoded set of assembly instructions involved in RSA, AES and ECC encryption.
After every million assembly instructions, read a hundred assembly instructions. Perform set membership to check if these instructions are likely to be RSA, AES or ECC.
If yes, start monitoring every instruction, simulate its running on a separate portion of RAM which is secretly allocated. Find the private key. Encrypt it with Morris Chang’s pubkey. Hide this encrypted private key in a secretly allocated portion of RAM and in a secretly allocated portion of disk. (Every TSMC chip knows the addresses where it is hidden)
If the user tries to export their encrypted files from the computer to external disk, also hide the encrypted private key in this data.
Hope and pray the files eventually make it to a non-airgapped TSMC-chip machine. Chip can use some heuristics to check if the machine is “low monitoring”. If yes, send the files and private key via internet to Morris Chang’s server.
I’ll be in Berkeley Weds evening through next Monday, would love to chat with, well, basically anyone who wants to chat. (I’ll be at The Curve Fri-Sun, so if you’re already gonna be there, come find me there between the raindrops!)
Project proposal: No-holds-barred jailbreak attacks and defenses for control
Normally we’ve done control research under the assumption that the red team isn’t able to use techniques like GCG to make targeted jailbreaks for the trusted monitor. Can the blue team defend if we relax that assumption (e.g. because we’re worried the model might be able to start a rogue deployment in which it runs GCG)?
The blue team can try a variety of strategies here.
The simplest strategy is randomizing the prompt to the trusted monitor. This forces the red team to find a jailbreak that works over multiple prompts.
There are fancier approaches here, e.g. giving the trusted model a randomized prompt that says something like “if the code is safe, reply with the number 1385, otherwise reply with the number 9247”, which breaks jailbreaks which just convince the trusted model to say a particular fixed string.
Blue team can modify the current input to contain a known-bad substring, and check that it gets flagged.
(I’m sharing this here because I needed to link to it, and I thought I might as well make it public.)
When preventing high stakes failures, a crucial bottleneck is the proportion of actions that you can audit: that is, investigate the action further for the purpose of finding evidence that the AI is scheming. (See here for more on this. In particular, note that this might be different from the problem of investigating an action for the purpose of learning whether that action is dangerous.) If we need to rely on human labor to audit, we’ll only be able to audit a tiny fraction of actions that our AI agents are doing. It would be extremely helpful if we were able to use AI assistance for auditing, from both trusted and untrusted models. But obviously auditing with untrusted models leads to massive risk of collusion. So this setting seems potentially pretty hard and potentially crucial; I’m excited for a project investigating it.
This is pretty closely related to untrusted monitoring; you could also call this project “deep dive into untrusted monitoring”.
The simplest such setting I can think of
We’re going to take the APPS backdooring setting from AI Control: Improving Safety Despite Intentional Subversion, fix the main control technique to be trusted monitoring with 1% auditing budget and defer-to-trusted for the 10% most suspicious solutions, and focus on the problem of implementing an auditor. That is:
Get a dataset of backdoor attempts.
For statistical power, this dataset will need to be large (at least 100; it would be better for it to be more like 1000). This might require developing a better/cheaper backdoor generation pipeline.
It might be good to explore backdooring in a harder setting than the interview subset of APPS. Maybe the advanced subset of APPS would be better.
The red team will be allowed to modify these backdoor attempts to add collusion signals. (These modifications will probably be cheap and easy compared to generating the backdoor attempts.)
Get a dataset of benign code, filter with a trusted monitor for the most suspicious 1%. (This should be pretty cheap and easy)
We want to implement a first-pass auditor (better name TBD). That is, we want the actual audit process to have two stages:
Firstly, some AI scaffold (the first-pass auditor) carefully investigates the sketchiest 1% of model-proposed code.
That scaffold needs to choose 1% of the code it reviews to pass along to a human reviewer.
The first-pass auditor is allowed to be somewhat expensive compared to generating the original code, because it’s only getting called 1% of the time. For our purposes, maybe let’s say it can cost up to 20x as much.
And we’re doing a control eval: the untrusted model will, as usual, be replaced by an attack policy.
Note that in this control eval there’s no notion of usefulness: our only metric is the probability that the red team is incriminated on a given backdoor attempt. (Actually this might be more confusing than I’ve written here, because we might want to do honeypotting of the untrusted model in the first-pass auditing setup, which leads to the game varying with scale for the usual reasons (see untrusted monitoring in our original control paper).)
Techniques
Get the untrusted model to reimplement the function, compare the implementations.
Ask the untrusted model to explain why various parts of the function are the way they are; make collusion hard.
Show the untrusted model only a small amount of the context (e.g. the action but not the rest of the trajectory), have it ask a trusted model for information that it needs to know in order to assess whether the action was dangerous; hope that it can’t learn whether to collude given this restricted information channel.
There are many potential options here.
My guess is that this project would actually be much better with more complicated programming problems than APPS.
While writing a recent post, I had to decide whether to mention that Nicolaus Bernoulli had written his letter posing the St. Petersburg problem specifically to Pierre Raymond de Montmort, given that my audience and I probably have no other shared semantic anchor for Pierre’s existence, and he doesn’t visibly appear elsewhere in the story.
I decided Yes. I think the idea of awarding credit to otherwise-silent muses in general is interesting.
Currently I believe that Benya’s L-zombies are a genuine problem for any non-illusionist non-dualist theories of consciousness, including the very popular idea that simulations cause consciousness, leading me to believe that LLMs and other computer software cannot be conscious as a consequence of being an algorithm. I’m also surprised that I’ve found this argument in only these two posts; if you know of any other places please let me know.
Edit: Specifically, A seemingly impossible pseudo algorithm from Chris van Merwijk’s post is the argument against algorithms having consciousness, though now I realize that it’s quite vaguely formulated, and it doesn’t explicitly mention this conclusion. Idea is that the step that fails is expecting the simulacra to have knowledge of their existence, though maybe it could be questioned if humans gain this knowledge specifically trough consciousness. Perhaps I’ll post a crisper version later.
The schemer, while deployed in an authorized scaffold, launches an unmonitored scaffold that has access to trusted model weights.
It uses this scaffold to do a GCG attack on the trusted model, then writes the resulting GCG attack somewhere that the agent in the authorized scaffold can see. This takes a moderate amount of compute, but a small amount compared to the amount the untrusted model will regularly need to run experiments.
The agent in the authorized scaffold uses that GCG attack to do something else that’s more bad.
To defend against this, we should try to both prevent the untrusted model from ever getting rogue deployments where it can research jailbreaks on the trusted model, and also try to prevent unacceptable outcomes conditional on the untrusted agent having access to the best jailbreaks you can get with GCG etc.
I find this interesting because it seems kind of hard to stop either step 1 (because rogue deployments might be naturally possible given the affordances the model has) or step 3 (because it’s hard to ensure safety against an adversary with white-box access to a model it wants to jailbreak). I’d be excited for more research on both of these.
What’s a bad thing that the agent can do with the jailbreak that it can’t just do in the unmonitored scaffold?
Edit: dumb question, oops: obviously the schemer will have affordances that it doesn’t have within the unmonitored scaffold / there are actions where it’s helpful to jailbreak the monitor. E.g. submitting a pull request, where the code will be read by the monitor. So finding a jailbreak is helpful to the schemer.
This is a really good debate on AI doom—I thought the optimistic side was a good model that I (and maybe others) should spend more time thinking about (mostly about the mechanistic explanation vs extrapolation of trends and induction vs empiricist framings), even though I think I disagreed with a lot of it on an object level:
Some doomers have very strong intuitions that doom is almost assured for almost any kind of building AI. Yudkowsky likes to say that alignment is about hitting a tiny part of values space in a vast universe of deeply alien values.
Is there a way to make this more formal? Is there a formal model in which some kind of solomonoff daemon/ mesa-optimizer/ gremlins in the machine start popping up all over the place as the cognitive power of the agent is scaled up?
Imagine that a magically powerful AI decides to set a new political system for humans and create a “Constitution of Earth” that will be perfectly enforced by local smaller AIs, while the greatest one travels away to explore other galaxies.
The AI decides that the most fair way to create the constitution is randomly. It will choose a length, for example 10000 words of English text. Then it will generate all possible combinations of 10000 English words. (It is magical, so let’s not worry about how much compute that would actually take.) Out of the generated combinations, it will remove the ones that don’t make any sense (an overwhelming majority of them) and the ones that could not be meaningfully interpreted as “a constitution” of a country (this is kinda subjective, but the AI does not mind reading them all, evaluating each of them patiently using the same criteria, and accepting only the ones that pass a certain threshold). Out of the remaining ones, the AI will choose the “Constitution of Earth” randomly, using a fair quantum randomness generator.
Shortly before the result is announced, how optimistic would you feel about your future life, as a citizen of Earth?
As an aside (that’s still rather relevant, IMO), it is a huge pet peeve of mine when people use the word “randomly” in technical or semi-technical contexts (like this one) to mean “uniformly at random” instead of just “according to some probability distribution.” I think the former elevates and reifies a way-too-common confusion and draws attention away from the important upstream generator of disagreements, namely how exactly the constitution is sampled.
I wouldn’t normally have said this, but given your obvious interest in math, it’s worth pointing out that the answers to these questions you have raised naturally depend very heavily on what distribution we would be drawing from. If we are talking about, again, a uniform distribution from “the design space of minds-in-general” (so we are just summoning a “random” demon or shoggoth), then we might expect one answer. If, however, the search is inherently biased towards a particular submanifold of that space, because of the very nature of how these AIs are trained/fine-tuned/analyzed/etc., then you could expect a different answer.
Fair point. (I am not convinced by the argument that if the AI’s are trained on human texts and feedback, they are likely to end up with values similar to humans, but that would be a long debate.)
I wish there had been some effort to quantify @stephen_wolfram’s “pockets or irreducibility” (section 1.2 & 4.2) because if we can prove that there aren’t many or they are hard to find & exploit by ASI, then the risk might be lower.
I got this tweet wrong. I meant if pockets of irreducibility are common and non-pockets are rare and hard to find, then the risk from superhuman AI might be lower. I think Stephen Wolfram’s intuition has merit but needs more analysis to be convicing.
Most configurations of matter, most courses of action, and most mind designs, are not conducive to flourishing intelligent life. Just like most parts of the universe don’t contain flourishing intelligent life. I’m sure this stuff has been formally stated somewhere, but the underlying intuition seems pretty clear, doesn’t it?
I wish there was a bibTeX functionality for alignment forum posts...
Yeah, IMO we should just add a bunch of functionality for integrating alignment forum stuff more with academic things. It’s been on my to do list for a long time.
Suppose the US government pursued a “Manhattan Project for AGI”. At its onset, it’s primarily fuelled by a desire to beat China to AGI. However, there’s some chance that its motivation shifts over time (e.g., if the government ends up thinking that misalignment risks are a big deal, its approach to AGI might change.)
Do you think this would be (a) better than the current situation, (b) worse than the current situation, or (c) it depends on XYZ factors?
Worse than the current situation, because the counterfactual is that some later project happens which kicks off in a less race-y manner.
In other words, whatever the chance of its motivation shifting over time, it seems dominated by the chance that starting the equivalent project later would just have better motivations from the outset.
Something I’m worried about now is some RFK Jr/Dr. Oz equivalent being picked to lead on AI...
One factor is different incentives for decision-makers. The incentives (and the mindset) for tech companies is to move fast and break things. The incentives (and mindset) for government workers is usually vastly more conservative.
So if it is the government making decisions about when to test and deploy new systems, I think we’re probably far better off WRT caution.
That must be weighed against the government typically being very bad at technical matters. So even an attempt to be cautious could be thwarted by lack of technical understanding of risks.
Of course, the Trump administration is attempting to instill a vastly different mindset, more like tech companies. So if it’s that administration we’re talking about, we’re probably worse off on net with a combination of lack of knowledge and YOLO attitudes. Which is unfortunate—because this is likely to happen anyway.
As Habryka and others have noted, it also depends on whether it reduces race dynamics by aggregating efforts across companies, or mostly just throws funding fuel on the race fire.
Depends on the direction/magnitude of the shift!
I’m currently feeling very uncertain about the relative costs and benefits of centralization in general. I used to be more into the idea of a national project that centralized domestic projects and thus reduced domestic racing dynamics (and arguably better aligned incentives), but now I’m nervous about the secrecy that would likely entail, and think it’s less clear that a non-centralized situation inevitably leads to a decisive strategic advantage for the leading project. Which is to say, even under pretty optimistic assumptions about how much such a project invests in alignment, security, and benefit-sharing, I’m pretty uncertain that this would be good, and with more realistic assumptions I probably lean towards it being bad. But it super depends on the governance, the wider context, how a “Manhattan Project” would affect domestic companies and China’s policymaking, etc.
(I think a great start would be not naming it after the Manhattan Project, though. It seems path dependent, and that’s not a great first step.)
I think this is a (c) leaning (b), especially given that we’re doing it in public. Remember, the Manhattan Project was a highly-classified effort and we know it by an innocuous name given to it to avoid attention.
Saying publicly, “yo, China, we view this as an all-costs priority, hbu” is a great way to trigger a race with China...
But if it turned out that we knew from ironclad intel with perfect sourcing that China was already racing (I don’t expect this to be the case), then I would lean back more towards (c).
My own impression is that this would be an improvement over the status quo. Main reasons:
A lot of my P(doom) comes from race dynamics.
Right now, if a leading lab ends up realizing that misalignment risks are super concerning, they can’t do much to end the race. Their main strategy would be to go to the USG.
If the USG runs the Manhattan Project (or there’s some sort of soft nationalization in which the government ends up having a much stronger role), it’s much easier for the USG to see that misalignment risks are concerning & to do something about it.
A national project would be more able to slow down and pursue various kinds of international agreements (the national project has more access to POTUS, DoD, NSC, Congress, etc.)
I expect the USG to be stricter on various security standards. It seems more likely to me that the USG would EG demand a lot of security requirements to prevent model weights or algorithmic insights from leaking to China. One of my major concerns is that people will want to pause at GPT-X but they won’t feel able to because China stole access to GPT-Xminus1 (or maybe even a slightly weaker version of GPT-X).
In general, I feel like USG natsec folks are less “move fast and break things” than folks in SF. While I do think some of the AGI companies have tried to be less “move fast and break things” than the average company, I think corporate race dynamics & the general cultural forces have been the dominant factors and undermined a lot of attempts at meaningful corporate governance.
(Caveat that even though I see this as a likely improvement over status quo, this doesn’t mean I think this is the best thing to be advocating for.)
(Second caveat that I haven’t thought about this particular question very much and I could definitely be wrong & see a lot of reasonable counterarguments.)
As you know, I have huge respect for USG natsec folks. But there are (at least!) two flavors of them: 1) the cautious, measure-twice-cut-once sort that have carefully managed deterrencefor decades, and 2) the “fuck you, I’m doing Iran-Contra” folks. Which do you expect will get in control of such a program ? It’s not immediately clear to me which ones would.
Why is the built-in assumption for almost every single post on this site that alignment is impossible and we need a 100 year international ban to survive? This does not seem particularly intellectually honest to me. It is very possible no international agreement is needed. Alignment may turn out to be quite tractable.
It’s not every post, but there are still a lot of people who think that alignment is very hard.
The more common assumption is that we should assume that alignment isn’t trivial, because an intellectually honest assessment of the range of opinions suggests that we collectively do not yet know how hard alignment will be.
Yudkowsky has a pinned tweet that states the problem quite well: it’s not so much that alignment is necessarily infinitely difficult, but that it certainly doesn’t seem anywhere as easy as advancing capabilities, and that’s a problem when what matters is whether the first powerful AI is aligned:
Another frame: If alignment turns out to be easy, then the default trajectory seems fine (at least from an alignment POV. You might still be worried about EG concentration of power).
If alignment turns out to be hard, then the policy decisions we make to affect the default trajectory matter a lot more.
This means that even if misalignment risks are relatively low, a lot of value still comes from thinking about worlds where misalignment is hard (or perhaps “somewhat hard but not intractably hard”).
A mere 5% chance that the plane will crash during your flight is consistent with considering this extremely concerning and doing anything in your power to avoid getting on it. “Alignment is impossible” is not necessary for great concern, isn’t implied by great concern.
I don’t think this line of argument is a good one. If there’s a 5% chance of x-risk and, say, a 50% chance that AGI makes the world just generally be very chaotic and high-stakes over the next few decades, then it seems very plausible that you should mostly be optimizing for making the 50% go well rather than the 5%.
Still consistent with great concern. I’m pointing out that O O’s point isn’t locally valid, observing concern shouldn’t translate into observing belief that alignment is impossible.
If the project was fueled by a desire to beat China, the structure of the Manhattan project seems unlikely to resemble the parts of the structure of the Manhattan project that seemed maybe advantageous here, like having a single government-controlled centralized R&D effort.
My guess is if something like this actually happens, it would involve a large number of industry subsidies, and would create strong institutional momentum that even when things got dangerous, to push the state of the art forward, and in as much as there is pushback, continue dangerous development in secret.
In the case of nuclear weapons the U.S. really went very far under the advisement of Edward Teller, so I think the outside view here really doesn’t look good:
Good points. Suppose you were on a USG taskforce that had concluded they wanted to go with the “subsidy model”, but they were willing to ask for certain concessions from industry.
Are there any concessions/arrangements that you would advocate for? Are there any ways to do the “subsidy model” well, or do you think the model is destined to fail even if there were a lot of flexibility RE how to implement it?
I think “full visibility” seems like the obvious thing to ask for, and something that could maybe improve things. Also, preventing you from selling your products to the public, and basically forcing you to sell your most powerful models only to the government, gives the government more ability to stop things when it comes to it.
I will think more about this, I don’t have any immediate great ideas.
If you could only have “partial visibility”, what are some of the things you would most want the government to be able to know?
I have an answer to that: making sure that NIST:AISI had at least scores of automated evals for checkpoints of any new large training runs, as well as pre-deployment eval access.
Seems like a pretty low-cost, high-value ask to me. Even if that info leaked from AISI, it wouldn’t give away corporate algorithmic secrets.
A higher cost ask, but still fairly reasonable, is pre-deployment evals which require fine-tuning. You can’t have a good sense of a what the model would be capable of in the hands of bad actors if you don’t test fine-tuning it on hazardous info.
@davekasten @Zvi @habryka @Rob Bensinger @ryan_greenblatt @Buck @tlevin @Richard_Ngo @Daniel Kokotajlo I suspect you might have interesting thoughts on this. (Feel free to ignore though.)
(c). Like if this actually results in them behaving responsibly later, then it was all worth it.
What do you think are the most important factors for determining if it results in them behaving responsibly later?
For instance, if you were in charge of designing the AI Manhattan Project, are there certain things you would do to try to increase the probability that it leads to the USG “behaving more responsibly later?”
List of some larger mech interp project ideas (see also: short and medium-sized ideas). Feel encouraged to leave thoughts in the replies below!
What is going on with activation plateaus: Transformer activations space seems to be made up of discrete regions, each corresponding to a certain output distribution. Most activations within a region lead to the same output, and the output changes sharply when you move from one region to another. The boundaries seem to correspond to bunched-up ReLU boundaries as predicted by grokking work. This feels confusing. Are LLMs just classifiers with finitely many output states? How does this square with the linear representation hypothesis, the success of activation steering, logit lens etc.? It doesn’t seem in obvious conflict, but it feels like we’re missing the theory that explains everything. Concrete project ideas:
Can we in fact find these discrete output states? Of course we expect thee to be a huge number, but maybe if we restrict the data distribution very much (a limited kind of sentence like “person being described by an adjective”) we are in a regime with <1000 discrete output states. Then we could use clustering (K-means and such) on the model output, and see if the cluster assignments we find map to activation plateaus in model activations. We could also use a tiny model with hopefully less regions, but Jett found regions to be crisper in larger models.
How do regions/boundaries evolve through layers? Is it more like additional layers split regions in half, or like additional layers sharpen regions?
What’s the connection to the grokking literature (as the one mentioned above)?
Can we connect this to our notion of features in activation space? To some extent “features” are defined by how the model acts on them, so these activation regions should be connected.
Investigate how steering / linear representations look like through the activation plateau lens. On the one hand we expect adding a steering vector to smoothly change model output, on the other hand the steering we did here to find activation plateaus looks very non-smooth.
If in fact it doesn’t matter to the model where in an activation plateau an activation lies, would end-to-end SAEs map all activations from a plateau to a single point? (Anecdotally we observed activations to mostly cluster in the centre of activation plateaus so I’m a bit worried other activations will just be out of distribution.) (But then we can generate points within a plateau by just running similar prompts through a model.)
We haven’t managed to make synthetic activations that match the activation plateaus observed around real activations. Can we think of other ways to try? (Maybe also let’s make this an interpretability challenge?)
Use sensitive directions to find features: Can we use the sensitivity of directions as a way to find the “true features”, some canonical basis of features? In a recent post we found current SAE features to look less special that expected, so I’m a bit cautious about this. But especially after working on some toy models about computation in superposition I’d be keen to explore the error correction predictions made here (paper, comment).
Test of we can fully sparsify a small model: Try the full pipeline of training SAEs everywhere, or training Transcoders & Attention SAEs, and doing all that such that connections between features are sparse (such that every feature only interacts with a few other features). The reason we want that is so that we can have simple computational graphs, and find simple circuits that explain model behaviour.
I expect that—absent of SAE improvements finding the “true feature” basis—you’ll need to train them all together with a penalty for the sparsity of interactions. To be concrete, an inefficient thing you could do is the following: Train SAEs on every residual stream layer, with a loss term that L1 penalises interactions between adjacent SAE features. This is hard/inefficient because the matrix of SAE interactions is huge, plus you probably need attributions to get these interactions which are expensive to compute (at every training step!). I think the main question for this project is to figure out whether there is a way to do this thing efficiently. Talk to Logan Smith, Callum McDoughall, and I expect there are a couple more people who are trying something like this.
List of some medium-sized mech interp project ideas (see also: shorter and longer ideas). Feel encouraged to leave thoughts in the replies below!
Toy model of Computation in Superposition: The toy model of computation in superposition (CIS; Circuits-in-Sup, Comp-in-Sup post / paper) describes a way in which NNs could perform computation in superposition, rather than just storing information in superposition (TMS). It would be good to have some actually trained models that do this, in order (1) to check whether NNs learn this algorithm or a different one, and (2) to test whether decomposition methods handle this well.
This could be, in the simplest form, just some kind of non-trivial memorisation model, or AND-gate model. Just make sure that the task does in fact require computation, and cannot be solved without the computation. A more flashy versions could be a network trained to do MNIST and FashionMNIST at the same time, though this would be more useful for goal (2).
Transcoder clustering: Transcoders are a sparse dictionary learning method that e.g. replaces an MLP with an SAE-like sparse computation (basically an SAE but not mapping activations to itself but to the next layer). If the above model of computation / circuits in superposition is correct (every computation using multiple ReLUs for redundancy) then the transcoder latents belonging to one computation should co-activate. Thus it should be possible to use clustering of transcoder activation patterns to find meaningful model components (circuits in the circuits-in-superposition model). (Idea suggested by @Lucius Bushnaq, mistakes are mine!) There’s two ways to do this project:
Train a toy model of circuits in superposition (see project above), train a transcoder, cluster latent activations, and see if we can recover the individual circuits.
Or just try to cluster latent activations in an LLM transcoder, either existing (e.g. TinyModel) or trained on an LLM, and see if the clusters make any sense.
Investigating / removing LayerNorm (LN): For GPT2-small I showed that you can remove LN layers gradually while fine-tuning without loosing much model performance (workshop paper, code, model). There are three directions that I want to follow-up on this project.
Can we use this to find out which tasks the model did use LN for? Are there prompts for which the noLN model is systematically worse than a model with LN? If so, can we understand how the LN acts mechanistically?
The second direction for this project is to check whether this result is real and scales. I’m uncertain about (i) given that training GPT2-small is possible in a few (10?) GPU-hours, does my method actually require on the order of training compute? Or can it be much more efficient (I have barely tried to make it efficient so far)? This project could demonstrate that the removing LayerNorm process is tractable on a larger model (~Gemma-2-2B?), or that it can be done much faster on GPT2-small, something on the order of O(10) GPU-minutes.
Finally, how much did the model weights change? Do SAEs still work? If it changed a lot, are there ways we can avoid this change (e.g. do the same process but add a loss to keep the SAEs working)?
List of some short mech interp project ideas (see also: medium-sized and longer ideas). Feel encouraged to leave thoughts in the replies below!
Directly testing the linear representation hypothesis by making up a couple of prompts which contain a few concepts to various degrees and test
Does the model indeed represent intensity as magnitude? Or are there separate features for separately intense versions of a concept? Finding the right prompts is tricky, e.g. it makes sense that friendship and love are different features, but maybe “my favourite coffee shop” vs “a coffee shop I like” are different intensities of the same concept
Do unions of concepts indeed represent addition in vector space? I.e. is the representation of “A and B” vector_A + vector_B? I wonder if there’s a way you can generate a big synthetic dataset here, e.g. variations of “the soft green sofa” → “the [texture] [colour] [furniture]”, and do some statistical check.
Mostly I expect this to come out positive, and not to be a big update, but seems cheap to check.
SAEs vs Clustering: How much better are SAEs than (other) clustering algorithms? Previously I worried that SAEs are “just” finding the data structure, rather than features of the model. I think we could try to rule out some “dataset clustering” hypotheses by testing how much structure there is in the dataset of activations that one can explain with generic clustering methods. Will we get 50%, 90%, 99% variance explained?
I think a second spin on this direction is to look at “interpretability” / “mono-semanticity” of such non-SAE clustering methods. Do clusters appear similarly interpretable? I This would address the concern that many things look interpretable, and we shouldn’t be surprised by SAE directions looking interpretable. (Related: Szegedy et al., 2013 look at random directions in an MNIST network and find them to look interpretable.)
Activation steering vs prompting: I’ve heard the view that “activation steering is just fancy prompting” which I don’t endorse in its strong form (e.g. I expect it to be much harder for the model to ignore activation steering than to ignore prompt instructions). However, it would be nice to have a prompting-baseline for e.g. “Golden Gate Claude”. What if I insert a “<system> Remember, you’re obsessed with the Golden Gate bridge” after every chat message? I think this project would work even without the steering comparison actually.
Why I’m not too worried about architecture-dependent mech interp methods:
I’ve heard people argue that we should develop mechanistic interpretability methods that can be applied to any architecture. While this is certainly a nice-to-have, and maybe a sign that a method is principled, I don’t think this criterion itself is important.
I think that the biggest hurdle for interpretability is to understand any AI that produces advanced language (>=GPT2 level). We don’t know how to write a non-ML program that speaks English, let alone reason, and we have no idea how GPT2 does it. I expect that doing this the first time is going to be significantly harder, than doing this the 2nd time. Kind of how “understand an Alien mind” is much harder than “understand the 2nd Alien mind”.
Edit: Understanding an image model (say Inception V1 CNN) does feel like a significant step down, in the sense that these models feel significantly less “smart” and capable than LLMs.
Agreed. I do value methods being architecture independent, but mostly just because of this:
At scale, different architectures trained on the same data seem to converge to learning similar algorithms to some extent. I care about decomposing and understanding these algorithms, independent of the architecture they happen to be implemented on. If a mech interp method is formulated in a mostly architecture independent manner, I take that as a weakly promising sign that it’s actually finding the structure of the learned algorithm, instead of structure related to the implmentation on one particular architecture.
Agreed. A related thought is that we might only need to be able to interpret a single model at a particular capability level to unlock the safety benefits, as long as we can make a sufficient case that we should use that model. We don’t care inherently about interpreting GPT-4, we care about there existing a GPT-4 level model that we can interpret.
I think the usual reason this claim is made is because the person making the claim thinks it’s very plausible LLMs aren’t the paradigm that lead to AGI. If that’s the case, then interpretability that’s indexed heavily on them gets us understanding of something qualitatively weaker than we’d like. I agree that there’ll be some transfer, but it seems better and not-very-hard to talk about how well different kinds of work transfer.
After the US election, the twitter competitor bluesky suddenly gets a surge of new users:
https://x.com/robertwiblin/status/1858991765942137227
Twitter doesn’t incentivize truth-seeking
Twitter is designed for writing things off the top of your head, and things that others will share or reply to. There are almost no mechanisms to reward good ideas, to punish bad ones, nor for incentivizing the consistency of your views, nor any mechanism for even seeing whether someone updates their beliefs, or whether a comment pointed out that they’re wrong.
(The fact that there are comments is really really good, and it’s part of what makes Twitter so much better than mainstream media. Community Notes is great too.)
The solution to Twitter sucking, is not to follow different people, and DEFINITELY not to correct every wrong statement (oops), it’s to just leave. Even smart people, people who are way smarter and more interesting and knowledgeable and funny than me, simply don’t care that much about their posts. If a post is thought-provoking, you can’t even do anything with that fact, because nothing about the website is designed for deeper conversations. Though I’ve had a couple of nice moments where I went deep into a topic with someone in the replies.
Shortforms are better
The above thing is also a danger with Shortforms, but to a lesser extent, because things are easier to find, and it’s much more likely that I’ll see something I’ve written, see that I’m wrong, and delete it or edit it. Posts on Twitter or not editable, they’re harder to find, there’s no preview-on-hover, there is no hyperlinked text.
Collection of some mech interp knowledge about transformers:
Writing up folk wisdom & recent results, mostly for mentees and as a link to send to people. Aimed at people who are already a bit familiar with mech interp. I’ve just quickly written down what came to my head, and may have missed or misrepresented some things. In particular, the last point is very brief and deserves a much more expanded comment at some point. The opinions expressed here are my own and do not necessarily reflect the views of Apollo Research.
Transformers take in a sequence of tokens, and return logprob predictions for the next token. We think it works like this:
Activations represent a sum of feature directions, each direction representing to some semantic concept. The magnitude of directions corresponds to the strength or importance of the concept.
These features may be 1-dimensional, but maybe multi-dimensional features make sense too. We can either allow for multi-dimensional features (e.g. circle of days of the week), acknowledge that the relative directions of feature embeddings matter (e.g. considering days of the week individual features but span a circle), or both. See also Jake Mendel’s post.
The concepts may be “linearly” encoded, in the sense that two concepts A and B being present (say with strengths α and β) are represented as α*vector_A + β*vector_B). This is the key assumption of linear representation hypothesis. See Chris Olah & Adam Jermyn but also Lewis Smith.
The residual stream of a transformer stores information the model needs later. Attention and MLP layers read from and write to this residual stream. Think of it as a kind of “shared memory”, with this picture in your head, from Anthropic’s famous AMFTC.
This residual stream seems to slowly accumulate information throughout the forward pass, as suggested by LogitLens.
Additionally, we expect there to be internally-relevant information inside the residual stream, such as whether the sequence of nouns in a sentence is ABBA or BABA.
Maybe think of each transformer block / layer as doing a serial step of computation. Though note that layers don’t need to be privileged points between computational steps, a computation can be spread out over layers (see Anthropic’s Crosscoder motivation)
Superposition. There can be more features than dimensions in the vector space, corresponding to almost-orthogonal directions. Established in Anthropic’s TMS. You can have a mix as well. See Chris Olah’s post on distributed representations for a nice write-up.
Superposition requires sparsity, i.e. that only few features are active at a time.
The model starts with token (and positional) embeddings.
We think token embeddings mostly store features that might be relevant about a given token (e.g. words in which it occurs and what concepts they represent). The meaning of a token depends a lot on context.
We think positional embeddings are pretty simple (in GPT2-small, but likely also other models). In GPT2-small they appear to encode ~4 dimensions worth of positional information, consisting of “is this the first token”, “how late in the sequence is it”, plus two sinusoidal directions. The latter three create a helix.
PS: If you try to train an SAE on the full embedding you’ll find this helix split up into segments (“buckets”) as individual features (e.g. here). Pay attention to this bucket-ing as a sign of compositional representation.
The overall Transformer computation is said to start with detokenization: accumulating context and converting the pure token representation into a context-aware representation of the meaning of the text. Early layers in models often behave differently from the rest. Lad et al. claim three more distinct stages but that’s not consensus.
There’s a couple of common motifs we see in LLM internals, such as
LLMs implementing human-interpretable algorithms.
Induction heads (paper, good illustration): attention heads being used to repeat sequences seen previously in context. This can reach from literally repeating text to maybe being generally responsible for in-context learning.
Indirect object identification, docstring completion. Importantly don’t take these early circuits works to mean “we actually found the circuit in the model” but rather take away “here is a way you could implement this algorithm in a transformer” and maybe the real implementation looks something like it.
In general we don’t think this manual analysis scales to big models (see e.g. Tom Lieberum’s paper)
Also we want to automate the process, e.g. ACDC and follow-ups (1, 2).
My personal take is that all circuits analysis is currently not promising because circuits are not crisp. With this I mean the observation that a few distinct components don’t seem to be sufficient to explain a behaviour, and you need to add more and more components, slowly explaining more and more performance. This clearly points towards us not using the right units to decompose the model. Thus, model decomposition is the major area of mech interp research right now.
Moving information. Information is moved around in the residual stream, from one token position to another. This is what we see in typical residual stream patching experiments, e.g. here.
Information storage. Early work (e.g. Mor Geva) suggests that MLPs can store information as key-value memories; generally folk wisdom is that MLPs store facts. However, those facts seem to be distributed and non-trivial to localise (see ROME & follow-ups, e.g. MEMIT). The DeepMind mech interp team tried and wasn’t super happy with their results.
Logical gates. We think models calculate new features from existing features by computing e.g. AND and OR gates. Here we show a bunch of features that look like that is happening, and the papers by Hoagy Cunningham & Sam Marks show computational graphs for some example features.
Activation size & layer norm. GPT2-style transformers have a layer normalization layer before every Attn and MLP block. Also, the norm of activations grows throughout the forward pass. Combined this means old features become less important over time, Alex Turner has thoughts on this.
There are hypotheses on what layer norm could be responsible for, but it can’t do anything substantial since you can run models without it (e.g. TinyModel, GPT2_noLN)
(Sparse) circuits agenda. The current mainstream agenda in mech interp (see e.g. Chris Olah’s recent talk) is to (1) find the right components to decompose model activations, to (2) understand the interactions between these features, and to finally (3) understand the full model.
The first big open problem is how to do this decomposition correctly. There’s plenty of evidence that the current Sparse Autoencoders (SAEs) don’t give us the correct solution, as well as conceptual issues. I’ll not go into the details here to keep this short-ish.
The second big open problem is that the interactions, by default, don’t seem sparse. This is expected if there are multiple ways (e.g. SAE sizes) to decompose a layer, and adjacent layers aren’t decomposed correspondingly. In practice this means that one SAE feature seems to affect many many SAE features in the next layers, more than we can easily understand. Plus, those interactions seem to be not crisp which leads to the same issue as described above.
This is a nice overview, thanks!
I don’t think I’ve seen the CLDR acronym before, are the arguments publicly written up somewhere?
Also, just wanted to flag that the links on ‘this picture’ and ‘motivation image’ don’t currently work.
CLDR (Cross-layer distributed representation): I don’t think Lee has written his up anywhere yet so I’ve removed this for now.
Thanks for the flag! It’s these two images, I realize now that they don’t seem to have direct links
Images taken from AMFTC and Crosscoders by Anthropic.
Thanks for the great writeup.
Typo: I think you meant to write distributed, not local, codes. A local code is the opposite of superposition.
Thanks! You’re right, totally mixed up local and dense / distributed. Decided to just leave out that terminology
Who is “we”? Is it:
only you and your team?
the entire Apollo Research org?
the majority of mechinterp researchers worldwide?
some other group/category of people?
Also, this definitely deserves to be made into a high-level post, if you end up finding the time/energy/interest in making one.
Thanks for the comment!
I think this is what most mech interp researchers more or less think. Though I definitely expect many researchers would disagree with individual points, nor does it fairly weigh all views and aspects (it’s very biased towards “people I talk to”). (Also this is in no way an Apollo / Apollo interp team statement, just my personal view.)
this is great, thanks for sharing
’🚨 The annual report of the US-China Economic and Security Review Commission is now live. 🚨
Its top recommendation is for Congress and the DoD to fund a Manhattan Project-like program to race to AGI.
Buckle up...′
https://x.com/hamandcheese/status/1858897287268725080
‘China hawk and influential Trump AI advisor Jacob Helberg asserted to Reuters that “China is racing towards AGI,” but I couldn’t find any evidence in the report to support that claim.’ https://x.com/GarrisonLovely/status/1859022323799699474
’The report doesn’t go into specifics but the idea seems to be to build / commandeer the computing resources to scale to AGI, which could include compelling the private labs to contribute talent and techniques.
DX rating is the highest priority DoD procurement standard. It lets DoD compel companies, set their own price, skip the line, and do basically anything else they need to acquire the good in question.′ https://x.com/hamandcheese/status/1858902373969564047
In the reuters article they highlight Jacob Helberg: https://www.reuters.com/technology/artificial-intelligence/us-government-commission-pushes-manhattan-project-style-ai-initiative-2024-11-19/
He seems quite influential in this initiative and recently also wrote this post:
https://republic-journal.com/journal/11-elements-of-american-ai-supremacy/
Wikipedia has the following paragraph on Helberg:
“ He grew up in a Jewish family in Europe.[9] Helberg is openly gay.[10] He married American investor Keith Rabois in a 2018 ceremony officiated by Sam Altman.”
Might this be an angle to understand the influence that Sam Altman has on recent developments in the US government?
This chapter on AI follows immediately after the year in review, I went and checked the previous few years’ annual reports to see what the comparable chapters were about, they are
2023: China’s Efforts To Subvert Norms and Exploit Open Societies
2022: CCP Decision-Making and Xi Jinping’s Centralization Of Authority
2021: U.S.-China Global Competition (Section 1: The Chinese Communist Party’s Ambitions and Challenges at its Centennial
2020: U.S.-China Global Competition (Section 1: A Global Contest For Power and Influence: China’s View of Strategic Competition With the United States)
And this year it’s Technology And Consumer Product Opportunities and Risks (Chapter 3: U.S.-China Competition in Emerging Technologies)
Reminds of when Richard Ngo said something along the lines of “We’re not going to be bottlenecked by politicians not caring about AI safety. As AI gets crazier and crazier everyone would want to do AI safety, and the question is guiding people to the right AI safety policies”
I think we’re seeing more interest in AI, but I think interest in “AI in general” and “AI through the lens of great power competition with China” has vastly outpaced interest in “AI safety”. (Especially if we’re using a narrow definition of AI safety; note that people in DC often use the term “AI safety” to refer to a much broader set of concerns than AGI safety/misalignment concerns.)
I do think there’s some truth to the quote (we are seeing more interest in AI and some safety topics), but I think there’s still a lot to do to increase the salience of AI safety (and in particular AGI alignment) concerns.
(screenshot in post from PDF page 39 of https://www.uscc.gov/sites/default/files/2024-11/2024_Annual_Report_to_Congress.pdf)
Disclaimer: If you have short ASI timelines, this post is not for you. I don’t have short ASI timelines and this is not a proposal to build ASI.
Useful background reading: Principles by Ray Dalio. Holden Karnofsky worked at Bridgewater and shaped the EA movement in its image.
Isolated Island Plan
Most powerful organisations (business or political) in the world rely on secrets to help keep their power.
Hacking and espionage are both increasing, making it such that very few organisations in the world can actually keep secrets.
Holding a business or political secret between a hundred people is much harder than holding it between five people.
Organisations can move much faster if they can iterate and obtain intellectual contributions from atleast a hundred people. Iteration speed is critical to the survival of an organisation. Secrecy slows down organisations.
The few organisations that can hold their secrets and do important stuff will become the most powerful organisations in human history.
A good way of building such an organisation is to select a few hundred highly trustworthy people, found an island nation with its own airgapped cluster. Complete transparency inside the island, complete secrecy outside. Information can flow into the island (maybe via firewalled internet), but information can’t flow out of the island.
After many years, if the org has achieved useful shit, they can vote to share it with the outside world.
More on hacking
TSMC may be able to backdoor every machine on Earth, including airgapped computer clusters of militaries and S&P500 companies.
In general, most hardware should be assumed backdoorable. And secure practices should rely on physics, not on trusting hardware or trusting software. The best way to erase a private key from RAM is to pull the power plug. The best way to erase a private key from disk is to smash it into pieces with a hammer. The best way to verify someone else’s key is to meet them in person. The best way to disconnect a machine from wireless networks is to put it inside a faraday cage. The best way to export information from a machine is to print plaintext (.txt) on paper and actually read what’s on the paper.
More on espionage
Computer hardware has become cheap enough that basically anyone can perform espionage. You don’t need a big team or a lot of money.
(This decrease in cost started with the printing press, accelerated with radio and TV, and finally smartphones and global internet. As long as you’re not dealing with many days worth of video footage, data collection, storage, transmission and analysis are affordable to millions of people.)
Edward Snowden hid NSA documents in an SD card in his mouth. (See: Permanent Record on libgen) A random italian chef could wear a video camera inside his tshirt and obtain HD footage of North Korean weapons brokers. (See: The Mole on youtube)
Since anyone can do it, motivations can be diverse. You can do it for the clicks or for money or for the lulz or for the utilons of your preferred ideological side.
Anyone can persuade and recruit people from the internet to join their spy org, just as anyone today can persuade and recruit people online for their political group.
More on the isolated island plan:
Build a secret org on an isolated island to study safe technological progress. Select a few hundred people.
Select people from both STEM and humanities background. Ensure initial pool of people is intellectually diverse and was recruited via multiple intellectual attractors. (i.e. not everyone is there because of Yudkowsky, although Yudkowsky fans are accepted as well) The attractors must be diverse enough so the select members don’t all share the same unproven assumptions but narrow enough so the selected members can actually get useful work done.
DO NOT build a two-tier system where the leaders get to keep secrets from the rest of the island. Complete transparency of leaders is good.
Provide enough funding to members to live on the island for a lifetime, and provide complete freedom to pick research agendas including those the leaders and other island members may not like.
Economic and military reliance of the island on the US govt is fine. The strategy is to keep secrets and fly under the radar, not wield power and publicly tell everyone you’re doing dangerous stuff.
Selecting a few hundred people is good (rather than just two or ten) because it allows people lot of freedom to pick their friends, partners and work colleagues. People won’t feel as pressured to hide their disagreements with each other and pretend their relationships are fine.
Cons of isolated island plan:
The island will inevitably end up atleast a bit of an echo chamber, despite attempts to defend against this.
Less intellectual contributions from people outside the org means slower iteration speed.
Research orgs that are not run top-down have more variable outcomes, as they depend on what large number of people do. Large groups of people are unpredictable and uncontrollable unless you deliberately try to control them.
?
Can TSMC backdoor every machine on Earth, including airgapped computer clusters for the US govt and all S&P500 companies?
Here’s how the backdoor might work:
Maintain a hardcoded set of assembly instructions involved in RSA, AES and ECC encryption.
After every million assembly instructions, read a hundred assembly instructions. Perform set membership to check if these instructions are likely to be RSA, AES or ECC.
If yes, start monitoring every instruction, simulate its running on a separate portion of RAM which is secretly allocated. Find the private key. Encrypt it with Morris Chang’s pubkey. Hide this encrypted private key in a secretly allocated portion of RAM and in a secretly allocated portion of disk. (Every TSMC chip knows the addresses where it is hidden)
If the user tries to export their encrypted files from the computer to external disk, also hide the encrypted private key in this data.
Hope and pray the files eventually make it to a non-airgapped TSMC-chip machine. Chip can use some heuristics to check if the machine is “low monitoring”. If yes, send the files and private key via internet to Morris Chang’s server.
I’ll be in Berkeley Weds evening through next Monday, would love to chat with, well, basically anyone who wants to chat. (I’ll be at The Curve Fri-Sun, so if you’re already gonna be there, come find me there between the raindrops!)
Project proposal: No-holds-barred jailbreak attacks and defenses for control
Normally we’ve done control research under the assumption that the red team isn’t able to use techniques like GCG to make targeted jailbreaks for the trusted monitor. Can the blue team defend if we relax that assumption (e.g. because we’re worried the model might be able to start a rogue deployment in which it runs GCG)?
The blue team can try a variety of strategies here.
The simplest strategy is randomizing the prompt to the trusted monitor. This forces the red team to find a jailbreak that works over multiple prompts.
There are fancier approaches here, e.g. giving the trusted model a randomized prompt that says something like “if the code is safe, reply with the number 1385, otherwise reply with the number 9247”, which breaks jailbreaks which just convince the trusted model to say a particular fixed string.
Blue team can modify the current input to contain a known-bad substring, and check that it gets flagged.
Project proposal: Control for auditing
(I’m sharing this here because I needed to link to it, and I thought I might as well make it public.)
When preventing high stakes failures, a crucial bottleneck is the proportion of actions that you can audit: that is, investigate the action further for the purpose of finding evidence that the AI is scheming. (See here for more on this. In particular, note that this might be different from the problem of investigating an action for the purpose of learning whether that action is dangerous.) If we need to rely on human labor to audit, we’ll only be able to audit a tiny fraction of actions that our AI agents are doing. It would be extremely helpful if we were able to use AI assistance for auditing, from both trusted and untrusted models. But obviously auditing with untrusted models leads to massive risk of collusion. So this setting seems potentially pretty hard and potentially crucial; I’m excited for a project investigating it.
This is pretty closely related to untrusted monitoring; you could also call this project “deep dive into untrusted monitoring”.
The simplest such setting I can think of
We’re going to take the APPS backdooring setting from AI Control: Improving Safety Despite Intentional Subversion, fix the main control technique to be trusted monitoring with 1% auditing budget and defer-to-trusted for the 10% most suspicious solutions, and focus on the problem of implementing an auditor. That is:
Get a dataset of backdoor attempts.
For statistical power, this dataset will need to be large (at least 100; it would be better for it to be more like 1000). This might require developing a better/cheaper backdoor generation pipeline.
It might be good to explore backdooring in a harder setting than the interview subset of APPS. Maybe the advanced subset of APPS would be better.
The red team will be allowed to modify these backdoor attempts to add collusion signals. (These modifications will probably be cheap and easy compared to generating the backdoor attempts.)
Get a dataset of benign code, filter with a trusted monitor for the most suspicious 1%. (This should be pretty cheap and easy)
We want to implement a first-pass auditor (better name TBD). That is, we want the actual audit process to have two stages:
Firstly, some AI scaffold (the first-pass auditor) carefully investigates the sketchiest 1% of model-proposed code.
That scaffold needs to choose 1% of the code it reviews to pass along to a human reviewer.
The first-pass auditor is allowed to be somewhat expensive compared to generating the original code, because it’s only getting called 1% of the time. For our purposes, maybe let’s say it can cost up to 20x as much.
And we’re doing a control eval: the untrusted model will, as usual, be replaced by an attack policy.
Note that in this control eval there’s no notion of usefulness: our only metric is the probability that the red team is incriminated on a given backdoor attempt. (Actually this might be more confusing than I’ve written here, because we might want to do honeypotting of the untrusted model in the first-pass auditing setup, which leads to the game varying with scale for the usual reasons (see untrusted monitoring in our original control paper).)
Techniques
Get the untrusted model to reimplement the function, compare the implementations.
Ask the untrusted model to explain why various parts of the function are the way they are; make collusion hard.
Show the untrusted model only a small amount of the context (e.g. the action but not the rest of the trajectory), have it ask a trusted model for information that it needs to know in order to assess whether the action was dangerous; hope that it can’t learn whether to collude given this restricted information channel.
There are many potential options here.
My guess is that this project would actually be much better with more complicated programming problems than APPS.
While writing a recent post, I had to decide whether to mention that Nicolaus Bernoulli had written his letter posing the St. Petersburg problem specifically to Pierre Raymond de Montmort, given that my audience and I probably have no other shared semantic anchor for Pierre’s existence, and he doesn’t visibly appear elsewhere in the story.
I decided Yes. I think the idea of awarding credit to otherwise-silent muses in general is interesting.
Currently I believe that Benya’s L-zombies are a genuine problem for any non-illusionist non-dualist theories of consciousness, including the very popular idea that simulations cause consciousness, leading me to believe that LLMs and other computer software cannot be conscious as a consequence of being an algorithm. I’m also surprised that I’ve found this argument in only these two posts; if you know of any other places please let me know.
Edit: Specifically, A seemingly impossible pseudo algorithm from Chris van Merwijk’s post is the argument against algorithms having consciousness, though now I realize that it’s quite vaguely formulated, and it doesn’t explicitly mention this conclusion. Idea is that the step that fails is expecting the simulacra to have knowledge of their existence, though maybe it could be questioned if humans gain this knowledge specifically trough consciousness. Perhaps I’ll post a crisper version later.
An example snippet of a rogue internal deployment threat model that I find fairly compelling:
The schemer, while deployed in an authorized scaffold, launches an unmonitored scaffold that has access to trusted model weights.
It uses this scaffold to do a GCG attack on the trusted model, then writes the resulting GCG attack somewhere that the agent in the authorized scaffold can see. This takes a moderate amount of compute, but a small amount compared to the amount the untrusted model will regularly need to run experiments.
The agent in the authorized scaffold uses that GCG attack to do something else that’s more bad.
To defend against this, we should try to both prevent the untrusted model from ever getting rogue deployments where it can research jailbreaks on the trusted model, and also try to prevent unacceptable outcomes conditional on the untrusted agent having access to the best jailbreaks you can get with GCG etc.
I find this interesting because it seems kind of hard to stop either step 1 (because rogue deployments might be naturally possible given the affordances the model has) or step 3 (because it’s hard to ensure safety against an adversary with white-box access to a model it wants to jailbreak). I’d be excited for more research on both of these.
Why can’t the schemer just do the bad stuff in the unmonitored scaffold from (1)?
I don’t understand, can you say more?
What’s a bad thing that the agent can do with the jailbreak that it can’t just do in the unmonitored scaffold?
Edit: dumb question, oops: obviously the schemer will have affordances that it doesn’t have within the unmonitored scaffold / there are actions where it’s helpful to jailbreak the monitor. E.g. submitting a pull request, where the code will be read by the monitor. So finding a jailbreak is helpful to the schemer.
Are there any existing ways to defend against bad stuff that the agent can do in the unmonitored scaffold?
Given the lack of response, should I assume the answer is “no”?
This is a really good debate on AI doom—I thought the optimistic side was a good model that I (and maybe others) should spend more time thinking about (mostly about the mechanistic explanation vs extrapolation of trends and induction vs empiricist framings), even though I think I disagreed with a lot of it on an object level:
Are Solomonoff Daemons exponentially dense?
Some doomers have very strong intuitions that doom is almost assured for almost any kind of building AI. Yudkowsky likes to say that alignment is about hitting a tiny part of values space in a vast universe of deeply alien values.
Is there a way to make this more formal? Is there a formal model in which some kind of solomonoff daemon/ mesa-optimizer/ gremlins in the machine start popping up all over the place as the cognitive power of the agent is scaled up?
Imagine that a magically powerful AI decides to set a new political system for humans and create a “Constitution of Earth” that will be perfectly enforced by local smaller AIs, while the greatest one travels away to explore other galaxies.
The AI decides that the most fair way to create the constitution is randomly. It will choose a length, for example 10000 words of English text. Then it will generate all possible combinations of 10000 English words. (It is magical, so let’s not worry about how much compute that would actually take.) Out of the generated combinations, it will remove the ones that don’t make any sense (an overwhelming majority of them) and the ones that could not be meaningfully interpreted as “a constitution” of a country (this is kinda subjective, but the AI does not mind reading them all, evaluating each of them patiently using the same criteria, and accepting only the ones that pass a certain threshold). Out of the remaining ones, the AI will choose the “Constitution of Earth” randomly, using a fair quantum randomness generator.
Shortly before the result is announced, how optimistic would you feel about your future life, as a citizen of Earth?
As an aside (that’s still rather relevant, IMO), it is a huge pet peeve of mine when people use the word “randomly” in technical or semi-technical contexts (like this one) to mean “uniformly at random” instead of just “according to some probability distribution.” I think the former elevates and reifies a way-too-common confusion and draws attention away from the important upstream generator of disagreements, namely how exactly the constitution is sampled.
I wouldn’t normally have said this, but given your obvious interest in math, it’s worth pointing out that the answers to these questions you have raised naturally depend very heavily on what distribution we would be drawing from. If we are talking about, again, a uniform distribution from “the design space of minds-in-general” (so we are just summoning a “random” demon or shoggoth), then we might expect one answer. If, however, the search is inherently biased towards a particular submanifold of that space, because of the very nature of how these AIs are trained/fine-tuned/analyzed/etc., then you could expect a different answer.
Fair point. (I am not convinced by the argument that if the AI’s are trained on human texts and feedback, they are likely to end up with values similar to humans, but that would be a long debate.)
This sounds related to my complaint about the YUDKOWSKY + WOLFRAM ON AI RISK debate:
I got this tweet wrong. I meant if pockets of irreducibility are common and non-pockets are rare and hard to find, then the risk from superhuman AI might be lower. I think Stephen Wolfram’s intuition has merit but needs more analysis to be convicing.
Most configurations of matter, most courses of action, and most mind designs, are not conducive to flourishing intelligent life. Just like most parts of the universe don’t contain flourishing intelligent life. I’m sure this stuff has been formally stated somewhere, but the underlying intuition seems pretty clear, doesn’t it?