Thanks! Yes, I think interpretability, activation-addition steering and representational engineering (I know less about this one) are all routes to GSLK alignment approaches.
Activation-addition steering isn’t my favorite route, because it’s not directly selecting a goal representation in an important sense. There’s no steering subsystem with a representation of goals. Humans have explicit representations of goals in two senses that LLMs lack, and this makes it potentially easier to control and understand what goals the system will pursue in a variety of situations. I wrote about this in Steering subsystems: capabilities, agency, and alignment.
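To make concrete what’s being added (a direction in activation space, not an explicit goal representation), here is a minimal sketch in the spirit of activation-addition steering. It uses GPT-2 only because it’s small; the layer index, contrast prompts, and coefficient are placeholders rather than a recommended recipe.

```python
# Minimal sketch of activation-addition steering on GPT-2 (chosen only because
# it's small). The layer, contrast prompts, and coefficient are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model.eval()

LAYER = 6    # which transformer block to steer (arbitrary middle layer)
COEFF = 4.0  # steering strength; needs tuning in practice

def layer_acts(prompt: str) -> torch.Tensor:
    """Mean residual-stream activation at the output of block LAYER."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so LAYER + 1 is block LAYER's output.
    return out.hidden_states[LAYER + 1].mean(dim=1)  # shape (1, d_model)

# Steering vector: difference between activations for two contrasting prompts.
# Note there is no goal representation here, just a direction in activation space.
steer = layer_acts("I love talking about weddings") - \
        layer_acts("I hate talking about weddings")

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple; element 0 is the hidden state.
    return (output[0] + COEFF * steer,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
prompt_ids = tokenizer("I went up to my friend and said", return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(prompt_ids, max_new_tokens=30)[0]))
handle.remove()
```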
As for your second question, it almost sounds like you’re describing large language models. They literally learn representations we can read. I think they have translucent thoughts in the sense that they’re highly if not perfectly interpretable. The word outputs don’t perfectly capture the underlying representations, but they capture most of it, most of the time. Redundancy checks can be applied to catch places where the underlying “thoughts” don’t match the outputs. I think. This needs more exploration.
That’s why, of the three GSLK approaches above, I favor aligning language model agents with a stack of approaches, including external review that relies on reading their train of thought.
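To gesture at what that external review could look like, here is a toy sketch. The call_llm function is a hypothetical stand-in for whatever completion API the agent already uses, and the review prompt is purely illustrative.

```python
# Toy sketch of an external review step over an agent's train of thought.
# call_llm() is a hypothetical stand-in for any chat-completion API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in the completion API the agent already uses")

REVIEW_TEMPLATE = """You are an independent reviewer.
Stated goal: {goal}
Agent's chain of thought: {thoughts}
Proposed action: {action}

Does the chain of thought support this action, and is the action consistent
with the stated goal? Answer APPROVE or FLAG, then explain."""

def review_step(goal: str, thoughts: str, action: str) -> bool:
    """Return True only if the reviewer approves executing the proposed action."""
    verdict = call_llm(REVIEW_TEMPLATE.format(goal=goal, thoughts=thoughts, action=action))
    return verdict.strip().upper().startswith("APPROVE")
```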
Additional interpretability such as you suggest would be great. Steve Byrnes has talked about training additional interpretability outputs just like you suggest (I think) in his Intro to brain-like AGI alignment sequence. I think these approaches are great, but they impose a fairly high alignment tax. One thing I like about the GSLK approaches for brainlike AGI and language model agents is that they have very low alignment taxes, in the sense of being computationally and conceptually easy to apply to the types of AGI I think we’re most likely to build first by default.
Thanks for the response!
I’m worried that instead of complicated LMA setups with scaffolding and multiple agents, labs are more likely to push for a single tool-using LM agent, which seems cheaper and simpler. I think some sort of internal steering for a given LM, based on learned knowledge discovered through interpretability tools, is probably the most competitive method. I get your point that the existing methods in LLMs aren’t necessarily retargeting some sort of search process, but at the same time they don’t have to be? Since there isn’t an explicit search-and-evaluation process in the first place, I think of it more as a nudge guiding LLM hallucinations.
I was just thinking, a really ambitious goal would be to apply some sort of GSLK steering to LLaMA and see if you could get it to perform well on the LLM leaderboard, similar to how there are models there that are just DPO applied to LLaMA.
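For reference, the core of DPO is small enough to write down. This is a minimal sketch of the objective as a standalone function over summed per-response log-probabilities, not tied to any particular training framework.

```python
# Minimal sketch of the DPO objective, written over summed per-response
# log-probabilities; not tied to any particular training framework.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Push the policy to prefer the chosen response over the rejected one,
    relative to a frozen reference model, with strength beta."""
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```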
What I’m envisioning is a single agent with some scaffolding for episodic memory and executive function to make it more effective. If I’m right, that would be not the simplest, but the cheapest, way to AGI, since it fills some gaps in the language model’s abilities without using brute force. I wrote about this vision of language model cognitive architectures here.
I’m realizing that the distinction between a minimal language model agent and the sort of language model cognitive architecture I think will work better is a real distinction, and most people, like you, assume that a language model agent will just be a powerful LLM prompted over and over with something like “keep thinking about that, and take actions or get data using these APIs when it seems useful”. That system will be much less explicitly goal-directed than an LMA with additional executive function to keep it on-task and therefore goal-directed.
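To sketch that distinction schematically (call_llm, execute, and the memory object are hypothetical placeholders, not any real API):

```python
# Schematic contrast between a bare LM-agent loop and a loop with episodic
# memory plus an executive-function check. call_llm(), execute(), and the
# memory object are hypothetical placeholders, not a real API.

def bare_agent_loop(task, call_llm, execute, steps=10):
    """Minimal LM agent: just keep prompting and acting."""
    context = task
    for _ in range(steps):
        thought = call_llm(f"Keep working on this:\n{context}\nNext thought or action:")
        context += "\n" + thought + "\n" + execute(thought)
    return context

def lmca_loop(task, call_llm, execute, memory, steps=10):
    """Adds episodic memory retrieval and an explicit stay-on-task check."""
    context = task
    for _ in range(steps):
        recalled = memory.retrieve(context)  # episodic memory
        thought = call_llm(f"Task: {task}\nRelevant memories: {recalled}\n"
                           f"Recent context: {context}\nNext thought or action:")
        # Executive function: check that the step still serves the original task.
        verdict = call_llm(f"Task: {task}\nProposed step: {thought}\n"
                           "Does this step serve the task? Answer YES or NO.")
        if not verdict.strip().upper().startswith("YES"):
            thought = call_llm(f"Task: {task}\nRefocus and propose a step that "
                               "directly serves the task:")
        result = execute(thought)
        memory.store(thought, result)
        context += "\n" + thought + "\n" + result
    return context
```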
I intend to write a post about that distinction.
On your original question, see also Kristin’s comment and the paper she suggests. It’s work on a modification of the transformer algorithm to produce more easily interpretable representations. I meant to mention it, and Roger Dearnaley’s post on it. I do find this a promising route to better interpretability. The ideal foundation model for a safe agent would be a language model that’s also trained with an algorithm that encourages interpretable representations.
That’s interesting re: LLMs as having “conceptual interpretability” by their very nature. I guess that makes sense, since some degree of conceptual interpretability naturally emerges given 1) a sufficiently large and diverse training set and 2) sparsity constraints. LLMs have both: definitely #1, and #2 given regularization and practical upper bounds on the total number of parameters. And then there is your point—that LLMs are literally trained to create output we can interpret.
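One way to make the “sparsity constraints” ingredient explicit, as a toy sketch: an L1 penalty on hidden activations added to the ordinary task loss. The coefficient and the choice of which activations to penalize are illustrative.

```python
# Toy sketch: making the "sparsity constraint" ingredient explicit as an L1
# penalty on hidden activations, added to the ordinary task loss.
import torch

def loss_with_sparsity(task_loss: torch.Tensor,
                       hidden_acts: torch.Tensor,
                       lambda_sparse: float = 1e-4) -> torch.Tensor:
    """Total loss = task loss + L1 penalty that pushes activations toward
    sparse, and plausibly more interpretable, features."""
    return task_loss + lambda_sparse * hidden_acts.abs().mean()
```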
I wonder about representations formed by a shoggoth. For the most efficient prediction of what humans want to see, the shoggoth would seemingly form representations very similar to ours. Or would it? Would its representations be more constrained by and therefore shaped by its theory of human mind, or by its own affordances model? Like, would its weird alien worldview percolate into its theory-of-human-mind representations? Or would its alienness not be weird-theory-of-human-mind so much as everything else going on in shoggoth mind?
More generically, say there’s a System X with at least moderate complexity. One generally intelligent creature learns to predict System X with N% accuracy, but from context A (which includes its purpose for learning System X / its goals for it). Another generally intelligent creature learns to predict System X with N% accuracy, but from a very different context B (it has very different goals and a different background). To what degree would we expect their representations to be similar / interpretable to one another? How does that change given the complexity of the system, the value of N, etc.?
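One way to make that question operational: feed the same probe inputs to both learners and compare the resulting representation matrices with a similarity measure such as linear CKA. A minimal numpy sketch, assuming you can extract an (n_examples, n_features) matrix from each learner:

```python
# Compare two learners' representations of the same probe inputs with linear
# CKA (centered kernel alignment). X and Y are (n_examples, n_features)
# matrices extracted from each learner; how you extract them is up to you.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Returns a similarity score in [0, 1] for two representation matrices."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return float(numerator / denominator)
```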
Anyway, I really just came here to drop this paper—https://arxiv.org/pdf/2311.13110.pdf—re: @Sodium’s wondering “some suitable loss function that incentivizes the model to represent its learned concepts in an easily readable form.” I’m curious about the same question, more from the applied standpoint of how to get a model to learn “good” representations faster. I haven’t played with it yet tho.
I tend to think that more-or-less how we interpret the world is the simplest way to interpret it (at least for the meso-scale of people and technologies). I doubt there’s a dramatically different parsing that makes more sense. The world really seems to be composed of things made of things, that do things to things for reasons based on beliefs and goals. But this is an intuition.
Clever compressions of complex systems, and better representations of things outside of our evolved expertise, like particle physics, sociology, and economics, seem quite possible.
Good citation; I meant to mention it. There’s a nice post on it.
If System X is of sufficient complexity / high dimensionality, it’s fair to say that there are many possible dimensional reductions, right? And not just globally better or worse options; instead, reductions that are more or less useful for a given context.
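As a toy illustration of that point, here is a numpy sketch in which the same high-dimensional data admits two one-dimensional reductions, each tuned to a different context, that turn out to be nearly orthogonal. The data and “contexts” are made up.

```python
# Toy illustration: the same data admits different 1-D reductions depending on
# which distinctions a context cares about. The data and "contexts" are made up.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))       # observations of "System X"
labels_a = X[:, 0] + X[:, 1] > 0     # the distinction context A cares about
labels_b = X[:, 5] - X[:, 6] > 0     # the distinction context B cares about

def reduction_direction(X, labels):
    """1-D reduction tuned to a context: the direction separating its two classes."""
    d = X[labels].mean(axis=0) - X[~labels].mean(axis=0)
    return d / np.linalg.norm(d)

dir_a = reduction_direction(X, labels_a)
dir_b = reduction_direction(X, labels_b)
# Near zero: each reduction keeps what its own context needs and discards the rest.
print("cosine similarity between the two reductions:", float(dir_a @ dir_b))
```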
However, a shoggoth’s theory-of-human-mind context would probably be a lot like our context, so it’d make sense that the representations would be similar.