This was a nice description, thanks! However, regarding:

"comprehensively interpreting networks [… aims to] identify all representations or circuits in a network or summarize the full computational graph of a neural network (whatever that might mean)"
I think this is an incredibly optimistic hope that needs to be challenged more.
On my model, GPT-N contains a mixture of (a) crisp representations, (b) fuzzy heuristics that are made crisp in GPT-(N+1), and (c) noise and misgeneralizations. Unless we’re discussing models that perfectly fit their training distribution, I expect comprehensively interpreting networks to involve untangling many competing fuzzy heuristics, all of which are imperfectly implemented. Perhaps you expect this to be possible? I’m pretty skeptical that it’s tractable, and I expect the best interpretability work to avoid confronting these completeness guarantees.
Related (I consider “mechanistic interpretability essentially solved” to be similar to your “comprehensively interpreting” goal)