What I’m envisioning is a single agent, with some scaffolding of episodic memory and executive function to make it more effective. If I’m right, that would be not the simplest, but the cheapest way to AGI, since it fills some gaps in the language model’s abilities without using brute force. I wrote about this vision of language model cognitive architectures here.
I’m realizing that the distinction between a minimal language model agent and the sort of language model cognitive architecture I think will work better is a real one. Most people assume, as you do, that a language model agent will just be a powerful LLM prompted over and over with something like “keep thinking about that, and take actions or get data using these APIs when it seems useful.” That system will be much less explicitly goal-directed than an LMA with additional executive function to keep it on-task and therefore goal-directed.
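To make that distinction concrete, here’s a rough sketch, purely illustrative and not drawn from any existing system; `call_llm` and `run_tool` are hypothetical stand-ins for whatever model and tool APIs one might use. The first loop is the minimal agent most people picture; the second adds a crude executive-function check that re-anchors the loop on the goal.

```python
# Illustrative sketch only: contrasting a minimal LLM agent loop with one that has
# an explicit executive-function step. `call_llm` and `run_tool` are hypothetical.

def minimal_agent(task, call_llm, run_tool, max_steps=20):
    """Bare loop: keep prompting the model; act whenever it asks for a tool."""
    context = [f"Task: {task}. Keep thinking; call tools when useful."]
    for _ in range(max_steps):
        reply = call_llm("\n".join(context))
        context.append(reply)
        if reply.startswith("TOOL:"):            # model decided to act
            context.append(run_tool(reply[5:]))  # append tool output and continue
    return context

def executive_agent(task, call_llm, run_tool, max_steps=20):
    """Same loop, plus an executive check that keeps the model pointed at the goal."""
    context = [f"Task: {task}."]
    for _ in range(max_steps):
        # Executive step: ask whether the current line of work still serves the goal.
        on_task = call_llm(
            f"Goal: {task}\nRecent work: {context[-3:]}\nStill on task? yes/no"
        )
        if not on_task.strip().lower().startswith("yes"):
            context.append(f"Refocus on the goal: {task}")
        reply = call_llm("\n".join(context))
        context.append(reply)
        if reply.startswith("TOOL:"):
            context.append(run_tool(reply[5:]))
    return context
```

The only structural difference is that extra check, but it’s what makes the second loop persistently goal-directed rather than just repeatedly prompted.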
I intend to write a post about that distinction.
On your original question, see also Kristin’s comment and the paper she suggests. It’s work on a modification of the transformer algorithm to produce more easily interpretable representations. I meant to mention it, along with Roger Dearnaley’s post on it. I do find this a promising route to better interpretability. The ideal foundation model for a safe agent would be a language model that’s also trained with an algorithm that encourages interpretable representations.