I agree this can be initially surprising to non-experts!
I just think this point about the amorality of LLMs is much better communicated by saying “LLMs are trained to continue text from an enormous variety of sources. Thus, if you give them [Nazi / Buddhist / Unitarian / corporate / garbage nonsense] text to continue, they will generally try to continue it in that style.”
Than to say “LLMs are like alien shoggoths.”
Like it’s just a better model to give people.
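A minimal sketch of what “continue it in that style” looks like in practice (the model name, prompts, and sampling settings here are purely illustrative; any small base, non-chat-tuned model would do):

```python
# Illustrative only: feed a base (not chat-tuned) causal LM prompts in two very
# different registers and watch the continuations tend to stay in each register.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any base LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompts = [
    "Dearly beloved, we are gathered here today to",
    "ERROR 0x80070057: The parameter is incorrect. Stack trace:",
]
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    print("---")
```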
Agreed, though of course as always, there is the issue that that’s an intentional-stance way to describe what a language model does: “they will generally try to continue it in that style.” Hence mechinterp, which tries to (heh) move to a mechanical stance, which will likely be something like “when you give them a [whatever] text to continue, it will match [some list of features], which will then activate [some part of the network that we will name later], which implements the style that matches those features”.
(incidentally, I think there’s some degree to which people who strongly believe that artificial NNs are alien shoggoths are underestimating the degree to which their own brains are also alien shoggoths. but that doesn’t make it a good model of either thing. the only reason it was ever an improvement over a previous word was that people had even more misleading intuitive-sketch models.)
“LLMs are trained to continue text from an enormous variety of sources”
This is a bit of a noob question, but is this true post-RLHF? Generally most of my interactions with language models these days (e.g. asking for help with code, asking to explain something I don’t understand about history/medicine/etc.) don’t feel like they’re continuing my text; it feels like they’re trying to answer my questions politely and well. I feel like “ask shoggoth and see what it comes up with” is a better model for me than “go to the AI and have it continue your text about the problem you have”.
To the best of my knowledge, the majority of research (all the research?) has found that the changes to an LLM’s text-continuation abilities from RLHF (or whatever descendant of RLHF is used) are extremely superficial.
To give one paper as an example, from the abstract:
“Our findings reveal that base LLMs and their alignment-tuned versions perform nearly identically in decoding on the majority of token positions (i.e., they share the top-ranked tokens). Most distribution shifts occur with stylistic tokens (e.g., discourse markers, safety disclaimers). These direct evidence strongly supports the hypothesis that alignment tuning primarily learns to adopt the language style of AI assistants, and that the knowledge required for answering user queries predominantly comes from the base LLMs themselves.”
Or, in short, the LLM is still basically doing the same thing, with a handful of additions from the fine-tuning that keep it on the desired track.
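A rough sketch of how one could eyeball the “share the top-ranked tokens” claim (this is not the paper’s code; the model pair is just one example of a base model and its chat-tuned sibling, and they’re assumed to share a tokenizer):

```python
# Illustrative only: run a base model and its chat-tuned version over the same
# text and count how often their top-ranked next token agrees at each position.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "Qwen/Qwen2.5-0.5B"            # example base model
tuned_name = "Qwen/Qwen2.5-0.5B-Instruct"  # its alignment-tuned counterpart

tok = AutoTokenizer.from_pretrained(base_name)
base = AutoModelForCausalLM.from_pretrained(base_name)
tuned = AutoModelForCausalLM.from_pretrained(tuned_name)

text = ("Q: Why is the sky blue?\n"
        "A: Shorter wavelengths of sunlight are scattered more strongly by air "
        "molecules, so scattered blue light reaches your eyes from all directions.")
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    base_top1 = base(ids).logits.argmax(dim=-1)   # top-ranked next token, per position
    tuned_top1 = tuned(ids).logits.argmax(dim=-1)

agreement = (base_top1 == tuned_top1).float().mean().item()
print(f"Top-1 next-token agreement: {agreement:.0%}")
```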
(I also think our very strong prior belief should be that LLMs are basically still text-continuation machines, given that 99.9% or so of the compute put into them is training them for this objective, and that neural networks lose plasticity as they learn. Ash and Adams is like a really good intro to this loss of plasticity, although most of the research that cites this is RL-related so people don’t realize.)
Similarly, a lot of people have remarked on how the textual quality of the responses from an RLHF’d language model can vary with the textual quality of the question. But of course this makes sense from a text-prediction perspective—a high-quality answer is more likely to follow a high-quality question in text than to follow a low-quality one. For base models, this kind of thing—preceding the model’s generation with high-quality text—was the only way to get high-quality answers; it’s still there, just hidden.
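As a rough illustration of that (gpt2 as a stand-in base model; the question and answer strings are made up), one can score how probable the same good answer is when it follows a low-effort question versus a carefully written one:

```python
# Illustrative only: mean log-probability of a fixed "good" answer under a base
# LM, conditioned on a sloppy question vs. a carefully written one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

answer = (" The loop is slow because each iteration pays Python interpreter "
          "overhead; vectorising the work with numpy removes most of it.")
questions = [
    "y is my python loop so slow lol",
    ("Q: I'm iterating over a million floats in pure Python and it takes several "
     "seconds. Why is this slow, and what should I try first?\nA:"),
]

def answer_logprob(question: str, answer: str) -> float:
    """Mean log-probability of the answer tokens, given the question as prefix."""
    q_ids = tok(question, return_tensors="pt").input_ids
    a_ids = tok(answer, return_tensors="pt").input_ids
    ids = torch.cat([q_ids, a_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)  # position i predicts ids[:, i+1]
    token_lp = logprobs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp[:, -a_ids.shape[1]:].mean().item()   # answer tokens only

for q in questions:
    print(f"{answer_logprob(q, answer):.3f}  {q[:40]!r}")
```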
So yeah, I do think this is a much better model for interacting with these things than asking a shoggoth. It actually gives you handles to interact with them better, while asking a shoggoth gives you no such handles.
The people who originally came up with the shoggoth meme, I’d bet, were very well aware of how LLMs are pretrained to predict text and how they are best modelled (at least for now) as trying to predict text. When I first heard the shoggoth meme that’s what I thought—I interpreted it as “it’s this alien text-prediction brain that’s been retrained ever so slightly to produce helpful chatbot behaviors. But underneath it’s still mostly just about text prediction. It’s not processing the conversation in the same way that a human would.”

Mildly relevant: in the Lovecraft canon, IIRC, shoggoths are servitor-creatures, basically beasts of burden. They aren’t really powerful intelligent agents in their own right; they are sculpted by their creators to perform useful tasks. So, for me at least, calling them shoggoths has different and more accurate vibes than, say, calling them Cthulhu. (My understanding of the canon may be wrong, though.)

(TBC, I totally agree that object-level communication about the exact points seems better, all else equal, if you can actually do this communication.)
Hmm, I think that’s a red herring though. Consider humans—most of them have read lots of text from an enormous variety of sources as well. Also while it’s true that current LLMs have only a little bit of fine-tuning applied after their pre-training, and so you can maybe argue that they are mostly just trained to predict text, this will be less and less true in the future.
How about “LLMs are like baby alien shoggoths that, instead of being raised in alien culture, we’ve adopted at birth and are trying to raise in human culture. By having them read the internet all day.”
(Come to think of it, I actually would feel noticeably more hopeful about our prospects for alignment success if we actually were “raising the AGI like we would a child.” If we had some interdisciplinary team of ML and neuroscience and child psychology experts that was carefully designing a curriculum for our near-future AGI agents, a curriculum inspired by thoughtful and careful analogies to human childhood, that wouldn’t change my overall view dramatically but it would make me noticeably more hopeful. Maybe brain architecture & instincts basically don’t matter that much and Blank Slate theory is true enough for our purposes that this will work to produce an agent with values that are in-distribution for the range of typical modern human values!)
(This doesn’t contradict anything you said, but it seems like we totally don’t know how to “raise an AGI like we would a child” with current ML. I don’t think it counts for very much if almost all of the training time is a massive amount of next-token prediction, and a curriculum of data might work very differently on an AI vs. a human, given the vastly different amount of data and the different training objective.)
I’ve seen mixed data on how important curricula are for deep learning. One paper (on CIFAR) suggested that curricula only help if you have very few datapoints or the labels are noisy. But possibly that doesn’t generalize to LLMs.
I think data ordering basically never matters for LLM pretraining. (As in, random is the best and trying to make the order more specific doesn’t help.)
That was my impression too.