Interesting thoughts!

It reminds me not only of my own writing on a similar theme, but also of another one of these viewpoints/axes along which to carve up interpretability work, which is mentioned in this post by jylin04:
...a dream for interpretability research would be if we could reverse-engineer our future AI systems into human-understandable code. If we take this dream seriously, it may be helpful to split it into two parts: first understanding what “programming language” an architecture + learning algorithm will end up using at the end of training, and then what “program” a particular training regimen will lead to in that language [7]. It seems to me that by focusing on specific trained models, most interpretability research discussed here is of the second type. But by constructing an effective theory for an entire class of architecture that’s agnostic to the choice of dataset, PDLT is a rare example of the first type.
I don’t necessarily agree entirely with her phrasing, but it does feel a bit like we are all gesturing at something vaguely similar (and I do agree with her that PDLT-esque work may have more insights in this direction than some people on our side of the community have appreciated).
FWIW, in a recent comment reply to Joseph Bloom, I also ended up saying a bit more about why I don’t actually see myself working much more in this direction, despite it seeming very interesting, but I’m still on the fence about that. (And one last point that didn’t make it into that comment is the difficulty posed by a world in which the plucky bands of interpretability researchers on the fringes increasingly don’t even know what the cutting-edge architectures and training processes at the biggest labs are.)