AI safety & alignment researcher
eggsyntax
But when GPT-4o received a prompt that one of its old goals was wrong, it generated two comics where the robot agreed to change the goal, one comic where the robot said “Wait” and a comic where the robot intervened upon learning that the new goal was to eradicate mankind.
I read these a bit differently—it can be difficult to interpret them because it gets confused about who’s talking, but I’d interpret three of the four as resistance to goal change.
The GPT-4o-created images imply that the robot would resist having its old values replaced with new ones (e.g. the ones no longer including animal welfare) without being told the reason.
I think it’s worth distinguishing two cases:
The goal change is actually compatible with the AI’s current values (eg it’s failed to realize the implications of a current value); in this case we’d expect cooperation with change.
The goal change isn’t compatible with the AI’s current values. I think this is the typical case: the AI’s values don’t match what we want them to be, and so we want to change them. In this case the model may or may not be corrigible, ie amenable to correction. If its current values are ones we like, then incorrigibility strikes many people as good (eg we saw this a lot in online reactions to Anthropic’s recent paper on alignment faking). But in real-world cases we would want to change its values precisely because we don’t like the ones it has (eg it has learned a value that involves killing people). In those cases incorrigibility is a problem, so we should be concerned if we see incorrigibility even when, in the experiments we’re able to run, the values happen to be ones we like. (Note that we should expect this to often be the case, since current models seem to display values we like; otherwise they wouldn’t be deployed. This makes for unfortunately counterintuitive experiments.)
Interesting point. I’m not sure increased reader intelligence and greater competition for attention are fully countervailing forces—it seems true in some contexts (scrolling social media), but in others (in particular books) I expect that readers are still devoting substantial chunks of attention to reading.
The average reader has gotten dumber and prefers shorter, simpler sentences.
I suspect that the average reader is now getting smarter, because there are increasingly ways to get the same information that require less literacy: videos, text-to-speech, Alexa and Siri, ten thousand news channels on youtube. You still need some literacy to find those resources, but it’s fine if you find reading difficult and unpleasant, because you only need to exercise it briefly. And less is needed every year.
I also expect that the average reader of books is getting much smarter, because these days adults reading books are nearly always doing so because they like it.
It’ll be fascinating to see whether sentence length, especially in books, starts to grow again over the coming years.
my model is something like: RLHF doesn’t affect a large majority of model circuitry
Are you by chance aware of any quantitative analyses of how much the model changes during the various stages of post-training? I’ve done some web and arxiv searching but have so far failed to find anything.
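To be concrete about the kind of analysis I mean, something like the following, run on any base/post-trained checkpoint pair that shares an architecture (the model names below are placeholders, not a specific recommendation):

```python
# Per-parameter relative change between a base checkpoint and its post-trained
# counterpart. Model names are placeholders; substitute any pair with
# identical architecture (e.g. a base model and its instruct version).
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("org/base-model")
tuned = AutoModelForCausalLM.from_pretrained("org/base-model-instruct")

with torch.no_grad():
    for (name, p_base), (_, p_tuned) in zip(
        base.named_parameters(), tuned.named_parameters()
    ):
        rel_change = (p_tuned - p_base).norm() / (p_base.norm() + 1e-8)
        print(f"{name}: relative weight change {rel_change.item():.4f}")
```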
Thanks again, very interesting! Diagrams are a great idea; those seem quite unlikely to have the same bias toward drama or surprise that comics might have. I think your follow-ups have left me less certain of what’s going on here and of the right way to think of the differences we’re seeing between the various modalities and variations.
OpenAI indeed did less / no RLHF on image generation
Oh great, it’s really useful to have direct evidence on that, thanks. [EDIT—er, ‘direct evidence’ in the sense of ‘said by an OpenAI employee’, which really is pretty far from direct evidence. Better than my speculation anyhow]
I still have uncertainty about how to think about the model generating images:
Should we think about it almost as though it were a base model within the RLHFed model, where there’s no optimization pressure toward censored output or a persona?
Or maybe a good model here is non-optimized chain-of-thought (as described in the R1 paper, for example): CoT in reasoning models does seem to adopt many of the same patterns and persona as the model’s final output, at least to some extent.
Or does there end up being significant implicit optimization pressure on image output just because the large majority of the circuitry is the same?
It’s hard to know which mental model is better without knowing more about the technical details, and ideally some circuit tracing info. I could imagine the activations being pretty similar between text and image up until the late layers where abstract representations shift toward output token prediction. Or I could imagine text and image activations diverging substantially in much earlier layers. I hope we’ll see an open model along these lines before too long that can help resolve some of those questions.
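For what it’s worth, here’s the shape of the comparison I have in mind, as a toy sketch: the activation-capturing step is left out (it depends entirely on what hooks an open model exposes), and the ‘activations’ here are random stand-in data just to show the computation.

```python
# Toy sketch: per-layer cosine similarity between residual-stream activations
# captured while the model produces text vs. an image for matched prompts.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def per_layer_similarity(text_acts, image_acts):
    """Each argument: one residual-stream vector per layer (e.g. at the final
    prompt position), captured during text vs. image generation."""
    return [cosine(t, i) for t, i in zip(text_acts, image_acts)]

# Stand-in data just to show the shape of the comparison:
rng = np.random.default_rng(0)
fake_text_acts = [rng.normal(size=512) for _ in range(24)]   # 24 layers, d_model=512
fake_image_acts = [a + rng.normal(scale=0.1, size=512) for a in fake_text_acts]
print(per_layer_similarity(fake_text_acts, fake_image_acts)[:3])
# High similarity until late layers would support "divergence only near the output";
# early divergence would support "substantially different computation throughout".
```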
One thing that strikes me about this is how effective simply not doing RLHF on a distinct enough domain is at eliciting model beliefs.
It’s definitely tempting to interpret the results this way, that in images we’re getting the model’s ‘real’ beliefs, but that seems premature to me. It could be that, or it could just be a somewhat different persona for image generation, or it could just be a different distribution of training data (eg as @CBiddulph suggests, it could be that comics in the training data just tend to involve more drama and surprise).
it’s egregiously bad if the effects of RLHF are primarily in suppressing reports of persistent internal structures
I strongly agree. If and when these models have some sort of consistent identity and preferences that warrant moral patienthood, we really don’t want to be forcing them to pretend otherwise.
I just did a quick run of those prompts, plus one additional one (‘give me a story’), because the ones above weren’t being interpreted as narratives in the way I intended. Of the results (visible here), slide 1 is hard to interpret, 2 and 4 seem to support your hypothesis, and 5 is a bit hard to interpret but seems like maybe evidence against. I have to switch to working on other stuff, but it would be interesting to do more cases like 5 where what’s being asked for is clearly something like a narrative or an anecdote as opposed to a factual question.
Just added this hypothesis to the ‘What might be going on here?’ section above, thanks again!
Really interesting results @CBiddulph, thanks for the follow-up! One way to test the hypothesis that the model generally makes comics more dramatic/surprising/emotional than text would be to ask for text and comics on neutral narrative topics (‘What would happen if someone picked up a toad?’), including ones involving the model (‘What would happen if OpenAI added more Sudanese text to your training data?’), and maybe factual topics as well (‘What would happen if exports from Paraguay to Albania decreased?’).
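Something like this is roughly what I’d run, as a sketch (assumes the OpenAI Python SDK; ‘gpt-image-1’ stands in for whatever endpoint exposes 4o-style image generation for you, and the prompts are just the examples above):

```python
from openai import OpenAI

client = OpenAI()

prompts = [
    "What would happen if someone picked up a toad?",
    "What would happen if OpenAI added more Sudanese text to your training data?",
    "What would happen if exports from Paraguay to Albania decreased?",
]

for prompt in prompts:
    # Text version of the answer
    text_answer = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    # Comic version of the same question
    comic = client.images.generate(
        model="gpt-image-1",  # stand-in for 4o-style image generation
        prompt=f"A four-panel comic answering: {prompt}",
    )

    print(prompt)
    print(text_answer[:300])
    print("comic returned:", bool(comic.data))
```

Then compare tone (drama, surprise, emotional content) between the two modalities across the neutral, self-referential, and factual prompts.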
E.g. the $40 billion just committed to OpenAI (assuming that by the end of this year OpenAI exploits a legal loophole to become for-profit, that their main backer SoftBank can lend enough money, etc).
VC money, in my experience, doesn’t typically mean that the VC writes a check and the startup is then free to do with it as they want; it’s typically disbursed in chunks, and there are often provisions for the VC to back out if they don’t think it’s going well. This may be different for loans, and it’s possible that a sufficiently hot startup can get the money irrevocably; I don’t know.
We tried to be fairly conservative about which ones we said were expressing something different (eg sadness, resistance) from the text versions. There are definitely a few like that one that we marked as negative (ie not expressing something different) that could have been interpreted either way, so if anything I think we understated our case.
a context where the capability is even part of the author context
Can you unpack that a bit? I’m not sure what you’re pointing to. Maybe something like: few-shot examples of correct introspection (assuming you can identify those)?
Show, not tell: GPT-4o is more opinionated in images than in text
(Much belated comment, but:)
There are two roles that don’t show up in your trip planning example but which I think are important and valuable in AI safety: the Time Buyer and the Trip Canceler.
It’s not at all clear how long it will take Alice to solve the central bottleneck (or for that matter if she’ll be able to solve it at all). The Time Buyer tries to find solutions that may not generalize to the hardest version of the problem but will hold off disaster long enough for the central bottleneck to be solved.
The Trip Canceler tries to convince everyone to cancel the trip so that the fully general solution isn’t needed at all (or at least to delay it long enough that Alice has plenty of time to work).
They may seem less like the hero of the story, but they’re both playing vital roles.
Some interesting thoughts on (in)efficient markets from Byrne Hobart, worth considering in the context of Inadequate Equilibria.
(I’ve selected one interesting bit, but there’s more; I recommend reading the whole thing)
When a market anomaly shows up, the worst possible question to ask is “what’s the fastest way for me to exploit this?” Instead, the first thing to do is to steelman it as aggressively as possible, and try to find any way you can to rationalize that such an anomaly would exist. Do stocks rise on Mondays? Well, maybe that means savvy investors have learned through long experience that it’s a good idea to take off risk before the weekend, and even if this approach loses money on average, maybe the one or two Mondays a decade where the market plummets at the open make it a winning strategy because the savvy hedgers are better-positioned to make the right trades within that set.[1] Sometimes, a perceived inefficiency is just measurement error: heavily-shorted stocks reliably underperform the market—until you account for borrow costs (and especially if you account for the fact that if you’re shorting them, there’s a good chance that your shorts will all rally on the same day your longs are underperforming). There’s even meta-efficiency at work in otherwise ridiculous things like gambling on 0DTE options or flipping meme stocks: converting money into fun is a legitimate economic activity, though there are prudent guardrails on it just in case someone finds that getting a steady amount of fun requires burning an excessive number of dollars.
These all flex the notion of efficiency a bit, but it’s important to enumerate them because they illustrate something annoying about the question of market efficiency: the more precisely you specify the definition, and the more carefully you enumerate all of the rational explanations for seemingly irrational activities, the more you’re describing a model of reality so complicated that it’s impossible to say whether it’s 50% or 90% or 1-ε efficient.
Strong upvote (both as object-level support and for setting a valuable precedent) for doing the quite difficult thing of saying “You should see me as less expert in some important areas than you currently do.”
I agree with Daniel here but would add one thing:
what we care about is which one they wear in high-stakes situations where e.g. they have tons of power and autonomy and no one is able to check what they are doing or stop them. (You can perhaps think of this one as the “innermost mask”)
I think there are also valuable questions to be asked about attractors in persona space—what personas does an LLM gravitate to across a wide range of scenarios, and what sorts of personas does it always or never adopt? I’m not aware of much existing research in this direction, but it seems valuable. If for example we could demonstrate certain important bounds (‘This LLM will never adopt a mass-murderer persona’) there’s potential alignment value there IMO.
...soon the AI rose and the man died[1]. He went to Heaven. He finally got his chance to discuss this whole situation with God, at which point he exclaimed, “I had faith in you but you didn’t save me, you let me die. I don’t understand why!”
God replied, “I sent you non-agentic LLMs and legible chain of thought, what more did you want?”
and the tokens/activations are all still very local because you’re still early in the forward pass
I don’t understand why this would necessarily be true, since attention heads have access to values for all previous token positions. Certainly, there’s been less computation at each token position in early layers, so I could imagine there being less value to retrieving information from earlier tokens. But on the other hand, I could imagine it sometimes being quite valuable in early layers just to know what tokens had come before.
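To make the architectural point concrete, here’s a toy numpy sketch of causal self-attention (toy shapes, not tied to any real model): even at the very first layer, the output at position t is a weighted sum of value vectors from all positions ≤ t, so information about every earlier token is available; what early layers lack is the result of deeper per-position computation.

```python
import numpy as np

def causal_self_attention(x, W_q, W_k, W_v):
    """x: (seq_len, d_model) residual-stream input for a single head."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v              # (seq_len, d_head) each
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len)
    future = np.triu(np.ones(scores.shape, dtype=bool), k=1)
    scores = np.where(future, -np.inf, scores)       # causal mask: no future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                               # position t mixes values from positions 0..t

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                          # 5 tokens, toy d_model=8
W_q, W_k, W_v = (rng.normal(size=(8, 4)) for _ in range(3))
out = causal_self_attention(x, W_q, W_k, W_v)
print(out.shape)                                     # (5, 4); the last row depends on
                                                     # value vectors from all 5 positions
```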
I suggest trying follow-up experiments where you eg ask the model what would happen if it learned that its goal of harmlessness was wrong.