A brief research note by Chris Olah on the purpose of mechanistic interpretability research. The introduction and table of contents are reproduced below.
Interpretability Dreams
An informal note by Chris Olah on the aspirations motivating mechanistic interpretability research. Published May 24th, 2023.
Our present research aims to create a foundation for mechanistic interpretability research. In particular, we’re focused on trying to resolve the challenge of superposition. In doing so, it’s important to keep sight of what we’re trying to lay the foundations for. This essay summarizes those motivating aspirations – the exciting directions we hope will be possible if we can overcome the present challenges.
We aim to offer insight into our vision for addressing mechanistic interpretability’s other challenges, especially scalability. Because we have focused on foundational issues, our longer-term path to scaling interpretability and tackling other challenges has often been obscure. By articulating this vision, we hope to clarify how we might overcome limitations, like analyzing massive neural networks, that might naively seem intractable for a mechanistic approach.
Before diving in, it’s worth making a few small remarks. Firstly, essentially all the ideas in this essay have been articulated before, but they were buried in previous papers. Our goal is just to surface those implicit visions, largely by quoting the relevant parts. Secondly, it’s important to note that everything in this essay is almost definitionally extremely speculative and uncertain. It’s far from clear that any of it will ultimately be possible. Finally, since the goal of this essay is to lay out our personal vision of what’s inspiring to us, it may come across as a bit grandiose – we hope that it can be understood as simply trying to communicate subjective excitement in an open way.
Overview
An Epistemic Foundation—Mechanistic interpretability is a “microscopic” theory because it’s trying to build a solid foundation for understanding higher-level structure, in an area where it’s very easy for us as researchers to misunderstand what is going on.
What Might We Build on Such a Foundation?—Many tantalizing possibilities for research exist (and have been preliminarily demonstrated in InceptionV1), if only we can resolve superposition and identify the right features and circuits in a model.
Larger Scale Structure—It seems likely that there is a bigger-picture, more abstract story that can be built on top of our understanding of features and circuits, something like organs in anatomy or brain regions in neuroscience.
Universality—It seems likely that many features and circuits are universal, forming across different neural networks trained on similar domains. This means that lessons learned studying one model give us footholds in future models.
Bridging the Microscopic to the Macroscopic—We’re already seeing that some microscopic, mechanistic discoveries (such as induction heads) have significant macroscopic implications. This bridge can likely be expanded as we pin down the foundations, turning our mechanistic understanding into something relevant to machine learning more broadly.
Automated Interpretability—It seems very possible that AI automation of interpretability may help it scale to large models if all else fails (although aesthetically, we might prefer other paths).
The End Goals—Ultimately, we hope this work can contribute to safety and also reveal beautiful structure inside neural networks.