(This comment is mostly a reconstruction/remix of some things I said on Discord)
It may not be obvious to someone who hasn’t spent time trying to direct base models why autoregressive prediction with latent guidance is potentially so useful.
A major reason steering base models is tricky is what I might call “the problem of the necessity of diegetic interfaces” (“diegetic”: occurring within the context of the story and able to be heard by the characters).
To control the future of a base model simulation by changing its prompt, I have to manipulate objects in the universe described by the prompt, such that they evidentially entail the constraints or outcomes I want. For instance, if I’m trying to instantiate a simulation of a therapist that interacts with a user, and I don’t want the language model to hallucinate details from a previous session, I might have the therapist open by asking the user what their name is, or by saying that it’s nice to meet them, to imply this is the first session. But this already places a major constraint on how the conversation begins, and it might be stylistically or otherwise inconsistent with other properties of the simulation I want.

Greater freedom can sometimes be bought by finding a non-diegetic framing for the text to be controlled; for instance, if I wanted to enforce that a chat conversation ends with the participants getting into an argument, despite it seeming friendly at the beginning, I could embed the log in a context where someone is posting it online, complaining about the argument. However, non-diegetic framings don’t solve the problem of the necessity of diegetic interfaces; they only offload it to the level above. Any particular framing technique, like a chat log posted online, is constrained to make sense given the desired content of the log; otherwise it may simply not work well (base models perform much worse with incoherent prompts) or impose unintended constraints on the log. For instance, it becomes unlikely that all the participants of the chat are the type of people who wouldn’t share the conversation in the event of an argument. I can try to invent a scenario that implies an exception, but you see, that’s a lot of work, and special-purpose narrative “interfaces” may need to be constructed to control each context. A prepended table of contents is a great way to control subsequent text, but it only works for types of text that would plausibly appear after a table of contents.
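As a toy illustration of the framing trick (the prompt text here is entirely made up, and I’m just showing how the pieces would be concatenated for a base-model completion), the non-diegetic frame does the steering while the chat itself stays untouched:

```python
# Toy example of a non-diegetic framing: rather than editing the chat itself,
# wrap it in a frame that evidentially implies the ending we want.
framing = (
    "Posting this because I'm still upset about how it ended. We were having a "
    "perfectly friendly chat and then it blew up into a huge argument out of "
    "nowhere. Full log below, judge for yourself:\n\n"
)
chat_so_far = (
    "alice: hey! how was the move?\n"
    "bob: exhausting but good, the new place has actual sunlight\n"
    "alice: "
)
# The base model continues `prompt`, now expecting the log to end in an argument.
prompt = framing + chat_so_far
```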
The necessity of diegetic interfaces also means it can be hard to intervene in a simulation, even when there’s a convenient way to semantically manipulate the story to entail my desired future, if it’s hard to write text in the diegetic style—for instance, if I’m simulating a letter from an 1800s philosopher who writes in a style that I can parse but not easily generate. If I make a clumsy interjection in my own words, it breaks the stylistic coherence of the context, and even if this doesn’t cause the simulation to derail or become disruptively situationally aware, I don’t want more snippets cropping up that sound like they were written by me instead of the character.
This means that when constructing executable contexts for base models, I’m often having to solve the double problem of finding both a context that generates desirable text, but which also has diegetic control levers built in so I can steer it more easily. This is fun, but also a major bottleneck.
Instruction-tuned chat models are easy to use because they solve this problem by baking in a default narrative where an out-of-universe AI generates text according to instructions; however, controlling the future with explicit instructions is still too rigid and narrow for my liking. And instruction-tuned models currently have many other problems, like mode collapse and the loss of many capabilities.
I’ve been aware of this control bottleneck since I first touched language models, and I’ve thought of various ideas for training or prompting models to be controllable via non-diegetic interfaces, like automatically generating a bunch of summaries or statements about text samples, prepending them to said samples, and training a model on them that you can use at runtime like a decision transformer conditioned on summaries/statements about the future. But the problem here is that unless your generated summaries are very diverse and cover many types of entanglements, you’ll once again be stuck with a too-rigid interface. Maybe sometimes you’ll want to control via instructions or statements of the author’s intent instead of summaries, etc. All these hand-engineered solutions felt clunky, and I had a sense that a more elegant solution must exist, since this seems to be so naturally how minds work.
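For concreteness, here’s a minimal sketch of what that kind of hand-engineered pipeline could look like; `summarize` and `base_model` are hypothetical placeholders, not real APIs:

```python
# Sketch: prepend machine-generated statements about a text to the text itself,
# fine-tune on the concatenation, then condition on a desired statement at
# runtime, decision-transformer style.

SEP = "\n---\n"

def build_training_example(text: str, summarize) -> str:
    """Build one training sample so the model learns p(text | statement-about-text)."""
    statement = summarize(text)  # a summary, outline, or author-intent statement
    return statement + SEP + text

def steer(base_model, desired_future: str, prompt: str, **gen_kwargs) -> str:
    """At runtime, condition on a statement we *want* to hold of the continuation."""
    return base_model.generate(desired_future + SEP + prompt, **gen_kwargs)
```

The rigidity problem shows up immediately: the model only learns to condition on whatever distribution of statements `summarize` happens to produce.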
Using a VAE is an elegant solution. The way it seems to work is this: the reconstruction objective makes the model treat the embedding of the input as generic evidence that’s useful for reconstructing the output, and the symmetry breaking at training forces it to be able to deal with many types of evidence—evidence of underdetermined structure (or something like that; I haven’t thought about VAEs from a theoretical perspective much yet). The effect of combining this with conditional text prediction, if it works the way we’re suspecting, is that the model will generalize to using the input to “reconstruct” the future in whatever way it’s natural for an embedding of the input to evidence the future, whether that’s a summary, an outline, an instruction, or a literal future-snippet. I would guess something similar happens in our brains, where we’re able to repurpose circuits learned from reconstruction tasks for guided generation.
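To pin down what I mean, here’s a rough sketch of that kind of objective under my own assumptions about the setup (the encoder, the way z is injected into the LM, and the function names are placeholders, not the actual implementation):

```python
# Sketch of a context-conditioned VAE objective: an encoder maps "evidence" text
# (summary, outline, instruction, or literal future snippet) to a latent z, and
# an autoregressive LM predicts the future span given the past context plus z.
import torch
import torch.nn.functional as F

def latent_guided_lm_loss(encoder, lm, evidence_ids, past_ids, future_ids, beta=1.0):
    # Encoder outputs a diagonal Gaussian over the latent guidance vector.
    mu, logvar = encoder(evidence_ids)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick

    # Decoder: the LM conditioned on past context and on z (e.g. injected as a
    # soft prefix), producing logits over the future tokens.
    logits = lm(past_ids, latent=z)
    recon = F.cross_entropy(logits.reshape(-1, logits.size(-1)), future_ids.reshape(-1))

    # KL toward the standard-normal prior keeps the latent space smooth, so that
    # different kinds of evidence about similar futures land near each other.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
    return recon + beta * kl
```

The point is that nothing in the loss privileges any particular relationship between the evidence and the future; the latent just has to carry whatever information helps reconstruct it.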
I’m fairly optimistic that with more engineering iteration and scale, context-conditioned VAEs will generalize in this “natural” way, because it should be possible to get a continuous latent space that puts semantically similar things (like a text vs an outline of it) close to each other: language models clearly already have this internally, but the structure is only accessible through narrative (a common problem with LLMs). That would be a huge boon for cyborgism, among many other applications.