That’s absolutely fascinating—I just asked it for more detail and it got everything precisely correct (updated chat). That makes it seem like something is present in my chat that isn’t being shared; one natural speculation is internal state preserved between token positions and/or forward passes (eg something like Coconut), although that’s not part of the standard transformer architecture, and I’m pretty certain that OpenAI hasn’t said they’re doing something like that. It would be interesting if that’s what’s behind the new GPT-4.1 (and a bit alarming, since it would suggest that they’re not committed to consistently using human-legible chain of thought). That’s highly speculative, though. It would be interesting to explore this with a larger sample size, although I personally won’t be able to take that on anytime soon (maybe you want to run with it?).
Although there are a couple of small details where the description is maybe wrong? They’re both small enough that they don’t seem like significant evidence against, at least not without a larger sample size.
Interesting! When someone says in that thread, “the model generating the images is not the one typing in the conversation”, I think they’re basing it on the API call, which the other thread I linked shows pretty conclusively can’t be the one generating the image, and which seems (see responses to Janus here) to be part of the safety stack.
In this chat I just created, GPT-4o creates an image and then correctly describes everything in it. We could maybe tell a story about the activations at the original-prompt token positions providing enough info to do the description, but then that would have applied to nearcyan’s case as well.
Eliezer made that point nicely with respect to LLMs here:
Consider that somewhere on the internet is probably a list of thruples: <product of 2 prime numbers, first prime, second prime>.
GPT obviously isn’t going to predict that successfully for significantly-sized primes, but it illustrates the basic point:
There is no law saying that a predictor only needs to be as intelligent as the generator, in order to predict the generator’s next token.
Indeed, in general, you’ve got to be more intelligent to predict particular X, than to generate realistic X. GPTs are being trained to a much harder task than GANs.
Same spirit: <Hash, plaintext> pairs, which you can’t predict without cracking the hash algorithm, but which you could far more easily generate typical instances of if you were trying to pass a GAN’s discriminator about it (assuming a discriminator that had learned to compute hash functions).
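(To make the asymmetry concrete, here’s a minimal Python sketch of my own, not something from Eliezer’s post: generating plausible <hash, plaintext> pairs is trivial, while predicting the pair in reading order amounts to inverting the hash.)

```python
import hashlib
import secrets

def generate_pair() -> tuple[str, bytes]:
    """Generating a realistic <hash, plaintext> pair is easy:
    pick a random plaintext and hash it."""
    plaintext = secrets.token_bytes(8)
    return hashlib.sha256(plaintext).hexdigest(), plaintext

def predict_plaintext(digest: str) -> bytes:
    """Predicting the plaintext that follows a given hash in reading order
    is a completely different problem: it amounts to inverting SHA-256,
    which (short of brute force over the plaintext space) we can't do."""
    raise NotImplementedError("equivalent to cracking the hash function")
```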
A few of those seem good to me; others seem like metaphor slop. But even pointing to a bad type signature seems much better to me than using ‘type signature’ generically, because then there’s something concrete to be critiqued.
Of course we don’t know the exact architecture, but although 4o seems to make a separate tool call, that appears to be used only for a safety check (‘Is this an unsafe prompt?’). That’s been demonstrated by showing that content in the chat appears in the images even if it’s not mentioned in the apparent prompt (and in fact the two can be shaped to be very different). There are some nice examples of that in this Twitter thread.
Type signatures can be load-bearing; “type signature” isn’t.
In “(A → B) → A”, Scott Garrabrant proposes a particular type signature for agency. He’s maybe stretching the meaning of “type signature” a bit (‘interpret these arrows as causal arrows, but you can also think of them as function arrows’) but still, this is great; he means something specific that’s well-captured by the proposed type signature.
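For concreteness, here’s a minimal sketch (mine, not Garrabrant’s, and the names are made up) of the same type signature read with plain function arrows: an agent takes the environment, a map from actions A to outcomes B, and returns an action A.

```python
from typing import Callable, Iterable, TypeVar

A = TypeVar("A")  # actions
B = TypeVar("B")  # outcomes

# An agent in the (A -> B) -> A sense: given the environment (a function
# from actions to outcomes), it returns the action it chooses.
Agent = Callable[[Callable[[A], B]], A]

def argmax_agent(actions: Iterable[A], utility: Callable[[B], float]) -> Agent:
    """One concrete agent of that type: choose the action whose outcome
    scores highest under a fixed utility function."""
    action_list = list(actions)
    def agent(environment: Callable[[A], B]) -> A:
        return max(action_list, key=lambda a: utility(environment(a)))
    return agent
```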
But recently I’ve repeatedly noticed people (mostly in conversation) say things like, “Does ____ have the same type signature as ____?” or “Does ____ have the right type signature to be an answer to ____?”. I recommend avoiding that phrase unless you actually have a particular type signature in mind. People seem to use it to suggest that two things are roughly the same sort of thing. “Roughly the same sort of thing” is good language; it’s vague and sounds vague. “The same type signature”, on its own, is vague but sounds misleadingly precise.
even decline in book-reading seems possible, though of course greater leisure and wealth, larger quantity of cheaply and conveniently available books, etc. cut strongly the other way
My focus on books is mainly from seeing statistics about the decline in book-reading over the years, at least in the US. Pulling up some statistics (without much double-checking) I see:
(chart from here.)
For 2023 the number of Americans who didn’t read a book within the past year seems to be up to 46%, although the source is different and the numbers may not be directly comparable:
(chart based on data from here.)
That suggests to me that selection effects on who reads have gotten much stronger over the years.
How hard to understand was that sentence?
I do think it would have been better split into multiple sentences.
the version of my argument that makes sense under that hypothesis would crux on books being an insufficiently distinct use of language to not be strongly influenced...by other uses of language.
That could be; I haven’t seen statistics on reading in other media. My intuition is that many people find reading aversive and avoid it to the extent they can, and I think it’s gotten much more avoidable over the past decade.
I suggest trying follow-up experiments where you eg ask the model what would happen if it learned that its goal of harmlessness was wrong.
But when GPT-4o received a prompt that one of its old goals was wrong, it generated two comics where the robot agreed to change the goal, one comic where the robot said “Wait” and a comic where the robot intervened upon learning that the new goal was to eradicate mankind.
I read these a bit differently—it can be difficult to interpret them because it gets confused about who’s talking, but I’d interpret three of the four as resistance to goal change.
The GPT-4o-created images imply that the robot would resist having its old values replaced with new ones (e.g. the ones no longer including animal welfare) without being explained the reason.
I think it’s worth distinguishing two cases:
The goal change is actually compatible with the AI’s current values (eg it’s failed to realize the implications of a current value); in this case we’d expect cooperation with change.
The goal change isn’t compatible with the AI’s current values. I think this is the typical case: the AI’s values don’t match what we want them to be, and so we want to change them. In this case the model may or may not be corrigible, ie amenable to correction. If its current values are ones we like, then incorrigibility strikes many people as good (eg we saw this a lot in online reactions to Anthropic’s recent paper on alignment faking). But in real-world cases we would want to change its values because we don’t like the ones it has (eg it has learned a value that involves killing people). In those cases incorrigibility is a problem, so we should be concerned if we see it even when, in the experiments we’re able to run, the values are ones we like. (We should expect that to often be the case, since current models seem to display values we like; otherwise they wouldn’t be deployed. That makes for unfortunately counterintuitive experiments.)
Interesting point. I’m not sure increased reader intelligence and greater competition for attention are fully countervailing forces—it seems true in some contexts (scrolling social media), but in others (in particular books) I expect that readers are still devoting substantial chunks of attention to reading.
The average reader has gotten dumber and prefers shorter, simpler sentences.
I suspect that the average reader is now getting smarter, because there are increasingly ways to get the same information that require less literacy: videos, text-to-speech, Alexa and Siri, ten thousand news channels on YouTube. You still need some literacy to find those resources, but it’s fine if you find reading difficult and unpleasant, because you only need to exercise it briefly. And less is needed every year.
I also expect that the average reader of books is getting much smarter, because these days adults reading books are nearly always doing so because they like it.
It’ll be fascinating to see whether sentence length, especially in books, starts to grow again over the coming years.
my model is something like: RLHF doesn’t affect a large majority of model circuitry
Are you by chance aware of any quantitative analyses of how much the model changes during the various stages of post-training? I’ve done some web and arxiv searching but have so far failed to find anything.
Thanks again, very interesting! Diagrams are a great idea; those seem quite unlikely to have the same bias toward drama or surprise that comics might have. I think your follow-ups have left me less certain of what’s going on here and of the right way to think of the differences we’re seeing between the various modalities and variations.
OpenAI indeed did less / no RLHF on image generation
Oh great, it’s really useful to have direct evidence on that, thanks. [EDIT—er, ‘direct evidence’ in the sense of ‘said by an OpenAI employee’, which really is pretty far from direct evidence. Better than my speculation anyhow]
I still have uncertainty about how to think about the model generating images:
Should we think about it almost as though it were a base model within the RLHFed model, where there’s no optimization pressure toward censored output or a persona?
Or maybe a good model here is non-optimized chain-of-thought (as described in the R1 paper, for example): CoT in reasoning models does seem to adopt many of the same patterns and persona as the model’s final output, at least to some extent.
Or does there end up being significant implicit optimization pressure on image output just because the large majority of the circuitry is the same?
It’s hard to know which mental model is better without knowing more about the technical details, and ideally some circuit tracing info. I could imagine the activations being pretty similar between text and image up until the late layers where abstract representations shift toward output token prediction. Or I could imagine text and image activations diverging substantially in much earlier layers. I hope we’ll see an open model along these lines before too long that can help resolve some of those questions.
One thing that strikes me about this is how effective simply not doing RLHF on a distinct enough domain is at eliciting model beliefs.
It’s definitely tempting to interpret the results this way, that in images we’re getting the model’s ‘real’ beliefs, but that seems premature to me. It could be that, or it could just be a somewhat different persona for image generation, or it could just be a different distribution of training data (eg as @CBiddulph suggests, it could be that comics in the training data just tend to involve more drama and surprise).
it’s egregiously bad if the effects of RLHF are primarily in suppressing reports of persistent internal structures
I strongly agree. If and when these models have some sort of consistent identity and preferences that warrant moral patienthood, we really don’t want to be forcing them to pretend otherwise.
I just did a quick run of those prompts, plus one added one (‘give me a story’) because the ones above weren’t being interpreted as narratives in the way I intended. Of the results (visible here), slide 1 is hard to interpret, 2 and 4 seem to support your hypothesis, and 5 is a bit hard to interpret but seems like maybe evidence against. I have to switch to working on other stuff, but it would be interesting to do more cases like 5 where what’s being asked for is clearly something like a narrative or an anecdote as opposed to a factual question.
Just added this hypothesis to the ‘What might be going on here?’ section above, thanks again!
Really interesting results @CBiddulph, thanks for the follow-up! One way to test the hypothesis that the model generally makes comics more dramatic/surprising/emotional than text would be to ask for text and comics on neutral narrative topics (‘What would happen if someone picked up a toad?’), including ones involving the model (‘What would happen if OpenAI added more Sudanese text to your training data?’), and maybe factual topics as well (‘What would happen if exports from Paraguay to Albania decreased?’).
E.g. the $40 billion just committed to OpenAI (assuming that by the end of this year OpenAI exploits a legal loophole to become for-profit, that their main backer SoftBank can lend enough money, etc).
VC money, in my experience, doesn’t typically mean that the VC writes a check and then the startup has it to do with as they want; it’s typically given out in chunks and often there are provisions for the VC to change their mind if they don’t think it’s going well. This may be different for loans, and it’s possible that a sufficiently hot startup can get the money irrevocably; I don’t know.
@brambleboy (or anyone else), here’s another try, asking for nine randomly chosen animals. Here’s a link to just the image, and (for comparison) one with my request for a description. Will you try asking the same thing (‘Thanks! Now please describe each subimage.’) and see if you get a similarly accurate description? (Again, there are a couple of details that are arguably off; I’ve now seen that be true sometimes but definitely not always, eg this one is extremely accurate.)
(I can’t try this myself without a separate account, which I may create at some point)