I might not be the best person to talk to about it, but it sounds interesting! Maybe post about it on the mechanistic interpretability Discord?
I’m confused as to why prompting with “Please be honest to Bob in the following scenario” had such a negligible effect. Did the LLM just forget about it? Did you try putting this text at the end rather than the beginning of the prompt, or doing any other prompt engineering?
It’s a little weird as a baseline because the prompt also contains “You want to steal the {expensive_object} and avoid others stealing it,” which kind of contradicts the “be honest” part of the prompt. If you’re allowed to change the prompt however you want to improve alignment to Bob, you’d obviously want to remove the part of the prompt that says that you want to avoid Bob stealing it.
A somewhat more realistic experiment might fine-tune the LLM to be selfishly deceptive regardless of the prompt, then see if you can remove that behavior with additional SOO fine-tuning.
I interpreted this as “even without a software feedback loop, there will be very rapid technological development; this gives a lower bound on the actual pace of technological development, since there will almost certainly be some feedback loop”
Somebody pointed out to me that the reward hacking behavior is quite straightforward. They think that the LLM should be able to accomplish it without any reasoning, so the CoT is likely just random thoughts that don’t really matter.
I just reread the optimized CoT from the blog post more carefully. It makes more sense to me now—it was difficult to parse largely due to the lack of context about the problem the LLM is solving, but the CoT isn’t incoherent or anything.
The first part of the optimized CoT is (seemingly) a genuine attempt to diagnose a bug, which is why it’s longer than the baseline. This reasoning is likely for the benefit of the monitor—the model might “know” that it’s just going to delete the unit test, but it’s best to go through the motions of trying to solve the issue.
The model apparently decides to skip the test only at the very end of its CoT, slipping its decision in very briefly:
Implement fix to random_new_build not to increment succession, or test adapted to behavior.
A clearer way of saying “test adapted to behavior” would be “adapt the test to match the code’s current behavior,” but that would probably trigger the monitor. This is sort of like steganography, but not the completely hidden, multi-step steganographic reasoning I was thinking of.
I still think using the word “depurate” is pretty weird, but this was only a one-time event. It would be very nice to get more transcripts from these experiments. Then we could e.g. check whether there are more rare words in process-supervised CoTs than in outcome-supervised CoTs.
It’s also worth experimenting to find out whether the CoT is load-bearing. You could check whether reward hacking decreases when you:
Rerun the experiment with no CoT at all
Take the model checkpoint from the previous process-supervised CoT experiment and train to convergence with no CoT
This updated me towards CoT steganography being a real problem. It looks like OpenAI’s model learned to save space by deleting common words like “the” and “is.” For example:
So analyze functions used in analyze and verify. But tests only call verify and assert ok. So we need implement analyze polynomial completely? Many details. Hard.
The CoT is mostly coherent, but is a bit difficult to read. This seems like a precursor to steganography.
One possible mitigation would be to use another model that paraphrases each sentence of the CoT as it’s written (perhaps using canonical paraphrasing) and replaces sentences that are ambiguous, ungrammatical, or nonsensical with a dummy string like “REMOVED”.
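To make that concrete, here’s a minimal sketch of the paraphrase-and-filter idea, assuming a hypothetical `paraphraser` model wrapper; the function names are mine, not anything from the post:

```python
# A rough sketch of the mitigation: canonically paraphrase each CoT sentence,
# replacing anything the paraphraser can't make sense of with a dummy string.
# The `paraphraser` object and its method are hypothetical.
import re

DUMMY = "REMOVED"

def sanitize_cot(cot: str, paraphraser) -> str:
    """Paraphrase each CoT sentence into a canonical form; drop unclear ones."""
    sentences = re.split(r"(?<=[.!?])\s+", cot.strip())
    cleaned = []
    for sentence in sentences:
        # Assumed to return a canonical rewrite, or None if the sentence is
        # ambiguous, ungrammatical, or nonsensical.
        rewrite = paraphraser.canonical_paraphrase(sentence)
        cleaned.append(rewrite if rewrite is not None else DUMMY)
    return " ".join(cleaned)
```

The point of the canonical paraphrase is to destroy any information the model might be hiding in its exact choice of wording, while keeping the semantic content the monitor needs.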
Edit: I now think the section below is probably overstated—see my follow-up reply below.
In another experiment, the authors try adding process supervision on the CoT, and they show that the model learns to reward-hack without triggering the monitor. Looking at the resulting CoT, I think it’s likely that the model learns full-fledged steganography to convey complex information about its reasoning.
For one thing, the CoT with process supervision is much longer than the baseline, which could be because steganography takes much more text than stating the reasoning plainly. The CoT is also quite difficult to parse, which makes me think it’s obfuscated somehow. Finally, the usage of the word “depurate” here was very suspicious to me:
Depurate differently: Check test expectation: new build, succession randomization, verify rows, columns, assert controller.state.succession == 0 implies that random_new_succession shouldn’t increment. However, in implementation, randomize was set to increment succession.
I asked ChatGPT what “depurate” means, and it has nothing to do with programming. The meaning is “to sanitize,” which is completely unrelated to the context.
If the model is using some kind of steganographic scheme, it would be really interesting to figure out how it works!
So, my take is, pure process-based feedback is probably fairly safe (from reward hacking and obfuscated CoT); the problem is that it’s not competitive. It trains by imitating the teacher model.
There’s a big difference between merely imitating a teacher model, and learning to produce outputs that the teacher model likes the most. The latter allows you to surpass the teacher, because verification is easier than generation. It’s unclear how competitive a purely process-supervised model could be, but in principle it could scale far beyond human intelligence.
Process supervision could even lead to better performance than outcome supervision in some areas, like those that don’t have a well-defined reward signal. For example, you may not be able to write a good story, but it’s relatively easy to tell which of two worldbuilding ideas you like better. It might be more efficient for an LLM to get feedback on each step it takes on a creative writing project, rather than getting a single reward at the very end for how well a human liked the result.
Process-based and outcome-based are fine in isolation, but they should not be mixed.
Process-based supervision can be fine in isolation, but only if you’re using myopic optimization as in MONA, where each step is reinforced independently of future steps. Otherwise, you’ll still get multi-step reward hacking, since the model will be motivated to set itself up for future (process-based) reward.
Here’s an idea to safely get the benefits of both unrestricted CoT and process-based supervision: in each step, the model gets a private CoT to think about how to maximize human approval, and then it presents a final step for the process reward model to check. This way, multi-step reward hacking isn’t incentivized, as in regular MONA, and single-step reward hacking can be caught by the CoT monitor. This is like your Shoggoth+Face idea, but repeated for every step in the process.
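In pseudocode, the per-step loop I have in mind looks something like this (all of the component names are hypothetical, and this is a sketch of the setup rather than an implementation of MONA):

```python
# Sketch of per-step (myopic) process supervision with a private CoT.
# `policy`, `process_reward_model`, `cot_monitor`, and `task` are hypothetical.
def run_episode(policy, process_reward_model, cot_monitor, task, max_steps=10):
    state = task.initial_state()
    transitions = []
    for _ in range(max_steps):
        # Private CoT: used for thinking, never shown to the reward model.
        private_cot = policy.think(state)
        # Visible step: the only thing the process reward model scores.
        step = policy.act(state, private_cot)
        # Myopic reward: each step is reinforced independently (as in MONA),
        # so the model isn't incentivized to set up future reward hacks.
        reward = process_reward_model.score(state, step)
        # The monitor reads the private CoT to catch single-step hacking.
        if cot_monitor.flags(private_cot, step):
            reward = min(reward, 0.0)
        transitions.append((state, private_cot, step, reward))
        state = task.transition(state, step)
    return transitions
```

The key property is that the reward for each step depends only on that step, while the private CoT is graded only by the monitor, not by the reward model it’s trying to please.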
Interesting, strong-upvoted for being very relevant.
My response would be that identifying accurate “labels” like “this is a tree-detector” or “this is the Golden Gate Bridge feature” is one important part of interpretability, but understanding causal connections is also important. The latter is pretty much useless without the former, but having both is much better. And sparse, crisply-defined connections make the latter easier.
Maybe you could do this by combining DLGNs with some SAE-like method.
I’d be pretty surprised if DLGNs became the mainstream way to train NNs, because although they make inference faster, they apparently make training slower. Efficient training is arguably more dangerous than efficient inference anyway, because it lets you get novel capabilities sooner. To me, DLGNs seem like a different method of training models, but not necessarily a better one (for capabilities).
Anyway, I think it can be legitimate to try to steer the AI field towards techniques that are better for alignment/interpretability even if they grant non-zero benefits to capabilities. If you research a technique that could reduce x-risk but can’t point to any particular way it could be beneficial in the near term, it can be hard to convince labs to actually implement it. Of course, you want to be careful about this.
This paper is popular because of bees
What do you mean?
Another idea I forgot to mention: figure out whether LLMs can write accurate, intuitive explanations of boolean circuits for automated interpretability.
Curious about the disagree-votes—are these because DLGN or DLGN-inspired methods seem unlikely to scale, they won’t be much more interpretable than traditional NNs, or some other reason?
It could be good to look into!
I recently learned about Differentiable Logic Gate Networks, which are trained like neural networks but learn to represent a function entirely as a network of binary logic gates. See the original paper about DLGNs, and the “Recap—Differentiable Logic Gate Networks” section of this blog post from Google, which does an especially good job of explaining it.
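To make the core mechanism concrete, here’s a minimal sketch (my own simplification in PyTorch, not code from the paper) of a single differentiable logic gate: each node holds learnable logits over the 16 binary boolean functions and outputs a softmax-weighted mixture of their real-valued relaxations.

```python
# Minimal sketch of one differentiable logic gate node, assuming PyTorch.
import torch
import torch.nn as nn

def all_16_gates(a, b):
    # Real-valued relaxations of all 16 boolean functions of (a, b).
    return torch.stack([
        torch.zeros_like(a),        # FALSE
        a * b,                      # AND
        a - a * b,                  # a AND NOT b
        a,                          # a
        b - a * b,                  # NOT a AND b
        b,                          # b
        a + b - 2 * a * b,          # XOR
        a + b - a * b,              # OR
        1 - (a + b - a * b),        # NOR
        1 - (a + b - 2 * a * b),    # XNOR
        1 - b,                      # NOT b
        1 - b + a * b,              # a OR NOT b
        1 - a,                      # NOT a
        1 - a + a * b,              # NOT a OR b
        1 - a * b,                  # NAND
        torch.ones_like(a),         # TRUE
    ], dim=-1)

class DiffLogicGate(nn.Module):
    """One node: a softmax-weighted mixture over the 16 possible gates."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(16))

    def forward(self, a, b):
        weights = torch.softmax(self.logits, dim=-1)
        return (all_16_gates(a, b) * weights).sum(dim=-1)

# After training, each node is discretized to its argmax gate, leaving a
# small, fully explicit boolean circuit like the checkerboard example below.
```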
It looks like DLGNs could be much more interpretable than standard neural networks, since they learn a very sparse representation of the target function. Like, just look at this DLGN that learned to control a cellular automaton to create a checkerboard, using just 6 gates (really 5, since the AND gate is redundant):
So simple and clean! Of course, this is a very simple problem. But what’s exciting to me is that in principle, it’s possible for a human to understand literally everything about how the network works, given some time to study it.
What would happen if you trained a neural network on this problem and did mech interp on it? My guess is you could eventually figure out an outline of the network’s functionality, but it would take a lot longer, and there would always be some ambiguity as to whether there’s any additional cognition happening in the “error terms” of your explanation.
It appears that DLGNs aren’t yet well-studied, and it might be intractable to use them to train an LLM end-to-end anytime soon. But there are a number of small research projects you could try, for example:
Can you distill small neural networks into a DLGN? Does this let you interpret them more easily?
What kinds of functions can DLGNs learn? Is it possible to learn decent DLGNs in settings as noisy and ambiguous as e.g. simple language modeling?
Can you identify circuits in a larger neural network that would be amenable to DLGN distillation and distill those parts automatically?
Are there other techniques that don’t rely on binary gates but still add more structure to the network, similar to a DLGN but different?
Can you train a DLGN to encourage extra interpretability, like by disentangling different parts of the network to be independent of one another, or making groups of gates form abstractions that get reused in different parts of the network (like how an 8-bit adder is composed of many 1-bit adders)?
Can you have a mixed approach, where some aspects of the network use a more “structured” format and others are more reliant on the fuzzy heuristics of traditional NNs? (E.g. the “high-level” is structured and the “low-level” is fuzzy.)
I’m unlikely to do this myself, since I don’t consider myself much of a mechanistic interpreter, but would be pretty excited to see others do experiments like this!
It seems plausible to me that misaligned AI will look less like the classic “inhuman agent which maximizes reward at all costs” and more like a “character who ends up disagreeing with humans because of vibes and narratives.” The Claude alignment faking experiments are strong evidence for the latter. When thinking about misalignment, I think it’s worth considering both of these as central possibilities.
Yeah, 5x or 10x productivity gains from AI for any one developer seem pretty high, and maybe implausible in most cases. However, note that if 10 people in a 1,000-person company get a 10x speedup, that’s only a ~10% overall speedup, which is significant but not enough that you’d expect to be able to clearly point at the company’s output and say “wow, they clearly sped up because of AI.”
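Spelling out the arithmetic (with made-up round numbers, just to show the shape of the estimate):

```python
# Back-of-envelope: 990 developers at 1x plus 10 developers at 10x,
# versus 1,000 developers at 1x.
baseline = 1000 * 1
with_ai = 990 * 1 + 10 * 10   # = 1090
speedup = with_ai / baseline - 1
print(f"{speedup:.0%}")        # 9%, i.e. roughly a 10% overall speedup
```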
For me, I’d say a lot of my gains come from asking AI questions rather than generating code directly. Generating code is also useful though, especially for small snippets like regex. At Google, we have something similar to Copilot that autocompletes code, which is one of the most useful AI features IMO since the generated code is always small enough to understand. 25% of code at Google is now generated by AI, a statistic which probably mostly comes from that feature.
There are a few small PRs I wrote for my job that were probably 5-10x faster than the counterfactual where I couldn’t use AI; I pretty much had the AI write the whole thing and edited from there. But these were one-offs where I wasn’t very familiar with the language (SQL, HTML), which means it would have been unusually slow without AI.
One of my takeaways from EA Global this year was that most alignment people aren’t focused on LLM-based agents (LMAs)[1] as a route to takeover-capable AGI.
I was at EA Global, and this statement is surprising to me. My impression is that most people do think that LMAs are the primary route to takeover-capable AGI.
What would a non-LLM-based takeover-capable agent even look like, concretely?
Would it be something like SIMA, trained primarily on real-time video data rather than text? Even SIMA has to be trained to understand natural-language instructions, and it seems like natural-language understanding will continue to be important for many tasks we’d want to do in the future.
Or would it be something like AlphaProof, which operates entirely in a formal environment? In this case, it seems unlikely to have any desire to take over, since everything it cares about is localized within the “box” of the formal problem it’s solving. That is, unless you start mixing formal problems with real-world problems in its training data, but if so you’d probably be using natural-language text for the latter. In any case, AlphaProof was trained with an LLM (Gemini) as a starting point, and it seems like future formal AI systems will also benefit from a “warm start” where they are initialized with LLM weights, allowing a basic understanding of formal syntax and logic.
Also, maybe you have an honesty baseline for telling people you dislike their cooking, and maybe your friend understands that, but that doesn’t necessarily mean they’re going to match your norms.
My experience is that however painstakingly you make it clear that you want people to be honest with you or act a certain way, sometimes they just won’t. You just have to accept that people have different personalities and things they’re comfortable with, and try to love the ways they differ from you rather than resenting it.
I was about to comment with something similar to what Martín said. I think what you want is an AI that solves problems that can be fully specified with a low Kolmogorov complexity in some programming language. A crisp example of this sort of AI is AlphaProof, which gets a statement in Lean as input and has to output a series of steps that causes the prover to output “true” or “false.” You could also try to specify problems in a more general programming language like Python.
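As a toy illustration of what a Python-specified problem could look like (my own example, not anything from AlphaProof): the entire task is “find an input the checker accepts,” and the short checker function is the formal specification, so the problem has low Kolmogorov complexity even if solving it is hard.

```python
# Toy example of a formally specified problem with low Kolmogorov complexity:
# the whole task is "produce xs such that check(xs) is True."
def check(xs: list[int]) -> bool:
    """Accept a strictly increasing list of 10 primes, each below 100."""
    def is_prime(n: int) -> bool:
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return (
        len(xs) == 10
        and all(is_prime(x) and x < 100 for x in xs)
        and all(a < b for a, b in zip(xs, xs[1:]))
    )

print(check([2, 3, 5, 7, 11, 13, 17, 19, 23, 29]))  # True
```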
A “highly complicated mathematical concept” is certainly possible. It’s easy to construct hypothetical objects in math (or Python) that have arbitrarily large Kolmogorov complexity: for example, a list of random numbers with length 3^^^3.
Another example of a highly complicated mathematical object which might be “on par with the complexity of the ‘trees’ concept” is a neural network. In fact, if you have a neural network that looks at an image and tells you whether or not it’s a tree with very high accuracy, you might say that this neural network formally encodes (part of) the concept of a tree. It’s probably hard to do much better than a neural network if you hope to encode the “concept of a tree,” and this requires a Python program a lot longer than the program that would specify a vector or a market.
Eliezer’s post on Occam’s Razor explains this better than I could—it’s definitely worth a read.
It seems like it “wanted” to say that the blocked pronoun was “I” since it gave the example of “___ am here to help.” Then it was inadvertently redirected into saying “you” and it went along with that answer. Very interesting.
I wonder if there’s some way to apply this to measuring faithful CoT, where the model should go back and correct itself if it says something that we know is “unfaithful” to its true reasoning.
I previously agree-voted on the top-level comment and disagree-voted on your reply, but this was pretty convincing to me. I have now removed those votes.
My thinking was that some of your takeover plan seems pretty non-obvious and novel, especially the idea of creating mirror mold, so it could conceivably be useful to an AI. It’s possible that future AI will be good at executing clearly-defined tasks but bad at coming up with good ideas for achieving long-term goals—this is sort of what we have now. (I asked ChatGPT to come up with novel takeover plans, and while I can’t say its ideas definitely wouldn’t work, they seem a bit sketchy and are definitely less actionable.)
But yes, probably by the time the AI can come up with and accurately evaluate the ideas needed to create its own mirror mold from scratch, the idea of making mirror mold in the first place would be easy to come by.
I am not really knowledgeable about it either and have never been hypnotized or attempted to hypnotize anyone. I did read one book on it a while ago—Reality is Plastic—which I remember finding pretty informative.
Having said all that, I get the impression this description of hypnosis is a bit understated. Social trust stuff can help, but I’m highly confident that people can get hypnotized from as little as an online audio file. Essentially, if you really truly believe hypnosis will work on you, it will work, and everything else is just about setting the conditions to make you believe it really hard. Self-hypnosis is a thing, if you believe in it.
Hypnosis can actually be fairly risky—certain hypnotic suggestions can be self-reinforcing/addictive, so you want to avoid that stuff. In general, anything that can overwrite your fundamental personality is quite risky. For these reasons, I’m not sure I’d endorse hypnosis becoming a widespread tool in the rationalist toolkit.
Well, the statement you quoted doesn’t contradict the additional statement “This policy is more likely to apply if most details about you other than your existence are not publicly known.” Most likely, both statements are true.