In other words, they can think about the thoughts “behind” the previous words they wrote. If you doubt me on this, try asking one what its words are referring to, with reference to its previous words. Its “attention” modules are actually intentionally designed to know this sort of thing, using key/query/value lookups that occur “behind the scenes” of the text you actually see on screen.
I don’t think that asking an LLM what its words are referring to is a convincing demonstration that there’s real introspection going on in there, as opposed to “plausible confabulation from the tokens written so far”. I think it is plausible there’s some real introspection going on, but I don’t think this is a good test of it—the sort of thing I would find much more compelling is if the LLMs could reliably succeed at tasks like
Human: Please think of a secret word, and don’t tell me what it is yet.
LLM: OK!
Human: What’s the parity of the alphanumeric index of the penultimate letter in the word, where A=1, B=2, etc?
LLM: Odd.
Human: How many of the 26 letters in the alphabet occur multiple times in the word?
LLM: None of them.
Human: Does the word appear commonly in two-word phrases, and if so on which side?
LLM: It appears as the second word of a common two-word phrase, and as the first word of a different common two-word phrase.
Human: Does the word contain any other common words as substrings?
LLM: Yes; it contains two common words as substrings, and in fact is a concatenation of those two words.
Human: What sort of role in speech does the word occupy?
LLM: It’s a noun.
Human: Does the word have any common anagrams?
LLM: Nope.
Human: How many letters long is the closest synonym to this word?
LLM: Three.
Human: OK, tell me the word.
LLM: It was CARPET.
but couldn’t (even with some substantial effort at elicitation) infer hidden words from such clues without chain-of-thought when it wasn’t the one to think of them. That would suggest to me that there’s some pretty real reporting on a piece of hidden state not easily confabulated about after the fact.
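As a quick aside (my own snippet, not part of the exchange), the purely mechanical clues in the example dialogue above do check out for “carpet”; the judgment-based ones (common phrases, part of speech, synonym length, anagrams) still need a human eye:

```python
# Sanity-check the mechanical clues in the dialogue above against "carpet".
word = "carpet"

penultimate = word[-2]                                   # 'e'
parity = "odd" if (ord(penultimate) - ord("a") + 1) % 2 else "even"
repeated = [c for c in set(word) if word.count(c) > 1]   # letters occurring multiple times
splits = [(word[:i], word[i:]) for i in range(1, len(word))]

print(parity)                    # odd   (E = 5)
print(repeated)                  # []    (no letter occurs multiple times)
print(("car", "pet") in splits)  # True  (concatenation of two common words)
```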
I tried a similar experiment w/ Claude 3.5 Sonnet, where I asked it to come up w/ a secret word and, in branching paths:
1. Asked directly for the word
2. Played 20 questions, and then guessed the word
In order to see if it does have a consistent word in mind that it can refer back to.
Branch 1:
Branch 2:
Which I just thought was funny.
Asking again, telling it about the experiment and how it’s important for it to try to give consistent answers, it initially said “telescope” and then gave hints towards a paperclip.
Interesting to see when it flips its answers, though it’s a simple setup to just repeatedly ask for its answer every time.
Also could be confounded by temperature.
Claude can think for himself before writing an answer (which is an obvious thing to do, so ChatGPT probably does it too).
In addition, you can significantly improve his ability to reason by letting him think more, so even if it were the case that this kind of awareness is necessary for consciousness, LLMs (or at least Claude) would already have it.
Yeah, I’m thinking about this in terms of introspection on non-token-based “neuralese” thinking behind the outputs. I agree that if you conceptualize the LLM as being the entire process that outputs each user-visible token, including potentially a lot of CoT-style reasoning that the model can see but the user can’t, and think of “introspection” as “ability to reflect on the non-user-visible process generating user-visible tokens”, then models can definitely attain that; but I didn’t read the original post as referring to that sort of behavior.
Yeah. The model has no information (except for the log) about its previous thoughts and it’s stateless, so it has to infer them from what it said to the user, instead of reporting them.
I don’t think that’s true—in eg the GPT-3 architecture, and in all major open-weights transformer architectures afaik, the attention mechanism is able to feed lots of information from earlier tokens and “thoughts” of the model into later tokens’ residual streams in a non-token-based way. It’s totally possible for the models to do real introspection on their thoughts (with some caveats about eg computation that occurs in the last few layers), it’s just unclear to me whether in practice they perform a lot of it in a way that gets faithfully communicated to the user.
Are you saying that after it has generated the tokens describing what the answer is, the previous thoughts persist, and it can then generate tokens describing them?
(I know that it can introspect on its thoughts during the single forward pass.)
During inference, for each token and each layer over it, the attention block computes key and value vectors; these are stored as that token’s KV cache entry for that layer. For the current token, the contribution of an attention block at some layer to the residual stream is computed by looking at the KV cache entries of the same layer across all the preceding tokens. This contribution doesn’t feed into the KV cache entry for the current token at the same layer; it only influences the entry at the next layer, which is how all of this can run in parallel when processing input tokens and in training. The dataflow is shallow but wide.
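To make that dataflow concrete, here is a minimal sketch of a single-head decoder with a KV cache, with toy dimensions and random weights (the names decode_step, Wq/Wk/Wv and all the numbers are my own stand-ins, not any real model’s; MLP blocks, normalization, and multi-head structure are omitted). It illustrates exactly the point above: the cache entry a token writes at layer l is fixed before that layer’s attention output is added to its residual stream, so that output can only show up in the entry written at layer l+1.

```python
# Minimal single-head decoder with a KV cache (toy sizes, random weights; MLP and
# normalization omitted). Illustrates the dataflow described above, nothing more.
import numpy as np

rng = np.random.default_rng(0)
d, n_layers = 16, 4                        # hypothetical hidden size and depth

# Per-layer query/key/value projections (random stand-ins for trained weights).
Wq = [rng.normal(size=(d, d)) for _ in range(n_layers)]
Wk = [rng.normal(size=(d, d)) for _ in range(n_layers)]
Wv = [rng.normal(size=(d, d)) for _ in range(n_layers)]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_step(x, kv_cache):
    """Process one new token embedding x, updating kv_cache in place.

    kv_cache[l] holds one (key, value) pair per already-processed token: that
    token's cache entry at layer l.
    """
    h = x                                            # residual stream for this token
    for l in range(n_layers):
        k, v = Wk[l] @ h, Wv[l] @ h                  # this token's entry at layer l,
        kv_cache[l].append((k, v))                   # fixed before attention runs here
        q = Wq[l] @ h
        keys = np.stack([k_ for k_, _ in kv_cache[l]])
        vals = np.stack([v_ for _, v_ in kv_cache[l]])
        attn = softmax(keys @ q / np.sqrt(d)) @ vals # read same-layer entries of this and all preceding tokens
        h = h + attn                                 # only influences entries from layer l+1 on
    return h

kv_cache = [[] for _ in range(n_layers)]
for _ in range(5):                                   # stream in 5 token embeddings
    out = decode_step(rng.normal(size=d), kv_cache)
print(len(kv_cache[0]), out.shape)                   # 5 cached entries per layer, (16,)
```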
So I would guess it should be possible to post-train an LLM to give answers like “................… Yes” instead of “Because 7! contains both 3 and 5 as factors, which multiply to 15 Yes”, and the LLM would still be able to take advantage of CoT (for more challenging questions), because it would be following a line of reasoning written down in the KV cache lines in each layer across the preceding tokens, even if in the first layer there is always the same uninformative dot token. The tokens of the question are still explicitly there and kick off the process by determining the KV cache entries over the first dot tokens of the answer, which can then be taken into account when computing the KV cache entries over the following dot tokens (moving up a layer where the dependence on the KV cache data over the preceding dot tokens is needed), and so on.
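Continuing the same toy sketch (it reuses rng, d, n_layers, and decode_step from above), here is a quick check of the “uninformative dot token” point as I read it: identical filler embeddings get identical first-layer cache entries, but the entries one layer up already differ, because each filler position attends over a longer cached prefix (the question plus the earlier fillers). This only shows the mechanism exists in the architecture; it says nothing about whether a post-trained model would actually exploit it.

```python
# (Continues the sketch above: reuses rng, d, n_layers, decode_step.)
# Feed a "question" followed by identical filler embeddings, like repeated "." tokens.
question = [rng.normal(size=d) for _ in range(3)]
filler = rng.normal(size=d)                      # one fixed "dot" embedding, reused

kv_cache = [[] for _ in range(n_layers)]
for x in question:
    decode_step(x, kv_cache)
for _ in range(4):
    decode_step(filler, kv_cache)

k0 = [k for k, _ in kv_cache[0][3:]]             # layer-0 keys over the filler positions
k1 = [k for k, _ in kv_cache[1][3:]]             # layer-1 keys over the filler positions
print(np.allclose(k0[0], k0[1]))                 # True: identical at the first layer
print(np.allclose(k1[0], k1[1]))                 # False: already diverged one layer up
```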
So I would guess it should be possible to post-train an LLM to give answers like “................… Yes” instead of “Because 7! contains both 3 and 5 as factors, which multiply to 15 Yes”, and the LLM would still be able to take advantage of CoT
This doesn’t necessarily follow—on a standard transformer architecture, this will give you more parallel computation but no more serial computation than you had before. The bit where the LLM does N layers’ worth of serial thinking to say “3” and then that “3” token can be fed back into the start of N more layers’ worth of serial computation is not something that this strategy can replicate!
Empirically, if you look at figure 5 in Measuring Faithfulness in Chain-of-Thought Reasoning, adding filler tokens doesn’t really seem to help models get these questions right.
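As a deliberately trivial illustration of that serial/parallel distinction (my own toy example, nothing transformer-specific): the chain below has 20 steps that each need the previous result, so extra parallel workers, or extra filler positions computed side by side, don’t shorten it; what helps is getting each intermediate result back as a fresh input, which is what emitting real CoT tokens provides.

```python
# Toy dependency chain: each step consumes the previous step's output, so the work is
# inherently serial. f and the constants are arbitrary; the point is only the shape
# of the dependency, not anything about transformers specifically.
def f(x: int) -> int:
    return (3 * x + 1) % 97          # arbitrary update; needs the previous result

x = 7
for _ in range(20):                  # 20 genuinely sequential steps
    x = f(x)
print(x)
```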
That’s relevant, but it’s about what I expected, and why I hedged with “it should be possible to post-train”, which that paper doesn’t explore. Residual stream on many tokens is working memory: N layers of “vertical” compute over one token only have one activation vector to work with, while with more filler tokens you have many activation vectors that can work on multiple things in parallel and then aggregate. If a weaker model doesn’t take advantage of this, or gets too hung up on concrete tokens to think about other things in the meantime, instead of being able to maintain multiple trains of thought simultaneously, a stronger model might[1].
Performance on long questions (such as reading comprehension) with an immediate answer (no CoT) shows that N layers across many tokens, with no opportunity for deeper serial compute, are sufficient for many purposes. But a question is only fully understood once it’s read completely, so some of the thinking about the answer can’t start before that. If there are no more tokens, this creates an artificial constraint on working memory for thinking about the answer; filler tokens should be useful for lifting it. Repeating the question seems to help, for example (see Figure 3 and Table 5).
The paper is from Jul 2023 and not from OpenAI, so it didn’t get to play with 2e25+ FLOPs models, and a new wave of 2e26+ FLOPs models is currently imminent.
Ooh.