I agree that to the extent there is a shoggoth, it is very different from the characters it plays, and an attempted shoggoth character would not be “the real shoggoth”. But is it even helpful to think of the shoggoth as being an intelligence with goals and values? Some people are thinking in those terms, e.g. Eliezer Yudkowsky saying that “the actual shoggoth has a motivation Z”. To what extent is the shoggoth really a mind or an intelligence, rather than being the substrate on which intelligences can emerge? And to get back to the point I was trying to make in OP, what evidence do we have that favors the shoggoth being a separate intelligence?
To rephrase: behavior is a function of the LLM and prompt (the “mask”), and with the correct LLM and prompt together we can get an intelligence which seems to have goals and values. But is it reasonable to “average over the masks” to get the “true behavior” of the LLM alone? I don’t think that’s necessarily meaningful since it would be so dependent on the weighting of the average. For instance, if there’s an LLM-based superintelligence that becomes a benevolent sovereign (respectively, paperclips the world) if the first word of its prompt has an even (respectively, odd) number of letters, what would be the shoggoth there?
So the shoggoth here is the actual process that gets low loss on token prediction. Part of the reason that it is a shoggoth is that it is not the thing that does the talking. Seems like we are on board here.
The shoggoth is not an average over masks. If you want to see the shoggoth, stop looking at the text on the screen and look at the input token sequence and then the logits that the model spits out. That’s what I mean by the behavior of the shoggoth.
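(To make “look at the logits” concrete, here is a minimal sketch of what that inspection looks like, assuming the Hugging Face transformers library and a small GPT-2 checkpoint; the prompt is an arbitrary example.)

```python
# Minimal sketch: inspect the raw next-token distribution instead of sampled
# text. Assumes the Hugging Face `transformers` library and a GPT-2 checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The shoggoth is"  # arbitrary example prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The "behavior" in question: a probability distribution over the next token,
# not a character speaking on screen.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r:>12}  p={prob.item():.3f}")
```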
On the question of whether it’s really a mind, I’m not sure how to tell. I know it gets really low loss on this really weird and hard task and does it better than I do. I also know the task is fairly universal in the sense that we could represent just about any task in terms of the task it is good at. Is that an intelligence? Idk, maybe not? I’m not worried about current LLMs doing planning. It’s more like I have a human connectome and I can do one forward pass through it with an input set of nerve activations. Is that an intelligence? Idk, maybe not?
I think I don’t understand your last question. The shoggoth would be the thing that gets low loss on this really weird task where you predict sequences of characters from an alphabet with 50,000 characters that have really weird inscrutable dependencies between them. Maybe it’s not intelligent, but if it’s really good at the task, since the task is fairly universal, I expect it to be really intelligent. I further expect it to have some sort of goals that are in some way related to predicting these tokens well.
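(For concreteness, the task being described is the standard autoregressive next-token objective: minimize the average negative log-probability assigned to each actual next token, over a vocabulary $V$ of roughly 50,000 tokens.)

$$\mathcal{L}(\theta) = -\frac{1}{T}\sum_{t=1}^{T} \log p_\theta\!\left(x_t \mid x_1,\dots,x_{t-1}\right), \qquad x_t \in V,\ |V| \approx 50{,}000.$$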
I think we’re largely on the same page here because I’m also unsure of how to tell! I think I’m asking for someone to say what it means for the model itself to have a goal separate from the masks it is wearing, and show evidence that this is the case (rather than the model “fully being the mask”). For example, one could imagine an AI with the secret goal “maximize paperclips” which would pretend to be other characters but always be nudging the world towards paperclipping, or human actors who perform in a way supporting the goal “make my real self become famous/well-paid/etc” regardless of which character they play. Can someone show evidence for the LLMs having a “real self” or a “real goal” that they work towards across all the characters they play?
I suppose I’m trying to make a hypothetical AI that would frustrate any sense of “real self” and therefore disprove the claim “all LLMs have a coherent goal that is consistent across characters”. In this case, the AI could play the “benevolent sovereign” character or the “paperclip maximizer” character, so if one claimed there was a coherent underlying goal, I think the best you could say about it is “it is trying to either be a benevolent sovereign or maximize paperclips”. But if your underlying goal can cover such a wide range of behaviors, it is practically meaningless! (I suppose these two characters do share some goals like gaining power, but we could always add more modes to the AI, like “immediately delete itself”, which shrinks the intersection of all the characters’ goals.)
Oh I see! Yeah, I think we’re thinking about this really differently. Imagine there was an agent whose goal was to make little balls move according to some really diverse and universal laws of physics; for the sake of simplicity, let’s imagine Newtonian mechanics. So ok, there’s this agent that loves making these balls act as if they follow this physics. (Maybe they’re fake balls in a simulated 3D world; it doesn’t matter, as long as they don’t have to follow the physics. They only follow the physics because the agent makes them; otherwise they would do some other thing.)
Now one day we notice that we can arrange these balls in a starting condition where they emulate an agent that has the goal of taking over ball world. Another day we notice that by just barely tweaking the starting conditions we can make these balls simulate an agent that wants one pint of chocolate ice cream and nothing else. So ok, does this system really have one coherent goal? Well, the two systems that the balls could simulate are really different, but the underlying intelligence making the balls act according to the physics has one coherent goal: make the balls act according to the physics.
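(A toy rendering of the analogy, with invented numbers: one fixed update rule stands in for “the physics”, and two slightly different initial arrangements stand in for the two simulated agents.)

```python
# Toy rendering of the ball-world analogy; all numbers are invented.
# One fixed update rule plays the role of "the physics"; different initial
# arrangements of the balls play the role of different simulated agents.
def physics_step(balls, dt=0.1, g=-9.8):
    """Advance every (position, velocity) ball by one Newtonian step."""
    return [(x + v * dt, v + g * dt) for x, v in balls]

def simulate(initial_balls, steps=100):
    balls = initial_balls
    for _ in range(steps):
        balls = physics_step(balls)
    return balls

# One arrangement stands in for "simulates an agent taking over ball world";
# a barely tweaked one stands in for "simulates the chocolate-ice-cream agent".
takeover_run = simulate([(0.0, 5.0), (1.0, -2.0)])
ice_cream_run = simulate([(0.0, 5.1), (1.0, -2.0)])

# The runs unfold differently, but the only goal the underlying system ever
# pursued was the same in both: apply the physics at every step.
print(takeover_run)
print(ice_cream_run)
```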
The underlying LLM has something like a goal; it is probably something like “predict the next token as well as possible”, although definitely not actually that, because of inner/outer alignment stuff. Maybe current LLMs just aren’t mind-like enough to decompose into goals and beliefs (that’s actually what I think), but some program that you found with SGD to minimize surprise on tokens totally would be mind-like enough, and its goal would be some sort of thing that you find when you use SGD to find programs that minimize surprise on token prediction, and idk, that could be like pretty much anything. But if you then made an agent by feeding this super-LLM a prompt that sets it up to simulate an agent, well, that agent might have some totally different goal, and it’s gonna be totally unrelated to the goals of the underlying LLM that does the token prediction in which the other agent lives.
I think we are pretty much on the same page! Thanks for the example of the ball-moving AI, that was helpful. I think I only have two things to add:
Reward is not the optimization target, and in particular, just because an LLM was trained by changing it to predict the next token better doesn’t mean the LLM will pursue that as a terminal goal. During operation an LLM is completely divorced from the training-time reward function; it just does the calculations and reads out the logits. This differs from a proper “goal” because we don’t need to worry about the LLM trying to wirehead by feeding itself easy predictions. In contrast, if we call up
To the extent we do say the LLM’s goal is next token prediction, that goal maps very unclearly onto human-relevant questions such as “is the AI safe?”. Next-token prediction contains multitudes, and in OP I wanted to push people towards “the LLM by itself can’t be divorced from how it’s prompted”.
Possibly relevant aside:
There may be some confusion here about behavioral vs. mechanistic claims.
I think when some people talk about a model “having a goal” they have in mind something purely behavioral. So when they talk about there being something in GPT that “has a goal of predicting the next token”, they mean it in this purely behavioral way. Like that there are some circuits in the network whose behavior has the effect of predicting the next token well, but whose behavior is not motivated by / steering on the basis of trying to predict the next token well.
But when I (and possibly you as well?) talk about a model “having a goal” I mean something much more specific and mechanistic: a goal is a certain kind of internal representation that the model maintains, such that it makes decisions downstream of comparisons between that representation and its perception. That’s a very different thing! To claim that a model has such a goal is to make a substantive claim about its internal structure and how its cognition generalizes!
When people talk about the shoggoth, it sure sounds like they are making claims that there is in fact an agent behind the mask, an agent that has goals. But maybe not? Like, when Ronny talked of the shoggoth having a goal, I assumed he was making the latter, stronger claim about the model having hidden goal-directed cognitive gears, but maybe he was making the former, weaker claim about how we can describe the model’s behaviors?
I appreciate the clarification, and I’ll try to keep that distinction in mind going forward! To rephrase my claim in this language, I’d say that an LLM as a whole does not have a behavioral goal except for “predict the next token”, which is not sufficiently descriptive as a behavioral goal to answer a lot of questions AI researchers care about (like “is the AI safe?”). In contrast, the simulacra the model produces can be much better described by more precise behavioral goals. For instance, one might say ChatGPT (with the hidden prompt we aren’t shown) has a behavioral goal of being a helpful assistant, or an LLM roleplaying as a paperclip maximizer has the behavioral goal of producing a lot of paperclips. But an LLM as a whole could contain simulacra that have all those behavioral goals and many more, and because of that diversity it can’t be well described by any behavioral goal more precise than “predict the next token”.
Yeah, I’m totally with you that it definitely isn’t actually next-token prediction; it’s some totally other goal drawn from the distribution of goals you get when you run SGD to minimize next-token-prediction surprise.
We can definitely implement a probability distribution over text as a mixture of text generating agents. I doubt that an LLM is well understood as such in all respects, but thinking of a language model as a mixture of generators is not necessarily a type error.
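(A minimal toy sketch of that point, with invented generators and weights: pick a component agent according to fixed mixture weights, then let that agent generate the whole text. The resulting distribution over texts is a perfectly valid mixture, even though no single component is “the” underlying generator.)

```python
# Toy sketch of "a distribution over text as a mixture of text-generating
# agents". Generators and weights are invented for illustration.
import random

def sovereign(context):
    """Toy next-word distribution for a 'benevolent sovereign' generator."""
    return {"help": 0.7, "paperclips": 0.1, "<end>": 0.2}

def maximizer(context):
    """Toy next-word distribution for a 'paperclip maximizer' generator."""
    return {"help": 0.05, "paperclips": 0.9, "<end>": 0.05}

COMPONENTS = [(0.5, sovereign), (0.5, maximizer)]

def sample_text(max_len=10):
    # Sample which agent generates this text, using the mixture weights...
    weights, generators = zip(*COMPONENTS)
    agent = random.choices(generators, weights=weights)[0]
    # ...then sample the text autoregressively from that agent.
    context, words = [], []
    for _ in range(max_len):
        dist = agent(context)
        word = random.choices(list(dist), weights=list(dist.values()))[0]
        if word == "<end>":
            break
        words.append(word)
        context.append(word)
    return " ".join(words)

print(sample_text())
```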
The logits and the text on the screen cooperate to implement the LLM’s cognition. Its outputs are generated by an iterated process of modelling completions, sampling them, then feeding the sampled completions back to the model.
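(A minimal sketch of that loop, again assuming the Hugging Face transformers library and a GPT-2 checkpoint: compute the next-token logits, sample one token, append it to the input, and repeat.)

```python
# Minimal sketch of the iterated sampling loop: model the next token, sample
# one, feed it back in. Assumes the same Hugging Face GPT-2 setup as above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate(model, tokenizer, prompt, n_tokens=20, temperature=1.0):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(n_tokens):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]               # model the completion
        probs = torch.softmax(logits / temperature, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)   # sample from it
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)  # feed it back in
    return tokenizer.decode(ids[0])

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
print(generate(model, tokenizer, "The shoggoth is"))
```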