"I is evaluated on utility for improving time-efficiency and accuracy in solving downstream tasks"
There seems to be a gap between this informal description and your pseudo-code, since in the pseudo-code the parameters only parametrise the R&D agent. On the other hand the execution agent is distinct and presumed to be not changing. At first, reasoning from the pseudo-code, I had the objection that the execution agent can't be completely static, because it somehow has to make use of whatever clever interpretability outputs the R&D agent comes up with (e.g. SAEs don't use themselves to solve OOD detection or whatever). Then I wondered if you wanted to bound the complexity of the execution agent somewhere. Then I looked back and saw the formula, which seems to cleverly bypass this by having the R&D agent do both steps but factoring through its representation of the model.
However this does seem different from the pseudo-code. If this is indeed different, which one do you intend?
Edit: no matter, I should just read more closely: the execution agent clearly takes the interpretability output as input, so I think I'm not confused. I'll leave this comment here as a monument to premature question-asking.
Later edit: ok, no, I'm still confused. It seems the interpretability output doesn't get used in your inner loop unless it is in fact what the R&D agent produces (which in the pseudo-code means it is just a part of what was called the R&D agent in the preceding text). That is, when we update the parameters we update the interpretability output for the next round. In which case things fit with your original formula, but having essentially factored the procedure into two pieces (the execution agent on the outside, the interpretability output on the inside) you are only allowing the inside piece to vary over the course of this process. So I think my original question still stands.
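To pin down what I mean by the inside and outside pieces, here is the loop structure as I'm reading it off the pseudo-code, written as a small Python sketch. Every name here (propose_representation_builder, execution_agent, update) is my own stand-in rather than your notation, so treat it as my reading and not your actual algorithm.

```python
# My reading of the round structure, with hypothetical names.
# The outer piece (execution_agent) is fixed across rounds; only the inner piece,
# the parametrised representation builder proposed by the R&D agent, gets updated.

def run_rounds(propose_representation_builder, execution_agent, update,
               rnd_agent_params, model, tasks, n_rounds):
    for _ in range(n_rounds):
        # Inner piece: the R&D agent (via its parameters) proposes how to turn the
        # model into a representation. This is the only thing that varies.
        build_representation = propose_representation_builder(rnd_agent_params)
        representation = build_representation(model)

        # Outer piece: a fixed execution agent uses the representation on tasks.
        score = sum(execution_agent(representation, task) == target
                    for task, target in tasks)

        # Only the R&D agent's parameters are updated; execution_agent never changes,
        # which is exactly the part my original question is about.
        rnd_agent_params = update(rnd_agent_params, score)
    return rnd_agent_params
```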
So to check the intuition here: we factor the interpretability algorithm into two pieces. The first piece never sees tasks and has to output some representation of the model. The second piece never sees the model and has to, given the representation and some prediction task for the original model, perform well across a sufficiently broad range of such tasks. It is penalised for computation time in this second piece. So overall the loss is supposed to motivate (a rough code sketch of the loss follows the list):
1. Discovering the capabilities of the model as operationalised by its performance on tasks, and also how that performance is affected by variations of those tasks (e.g. modifying the prompt for your Shapley values example, and for your elicitation example).
2. Representing those capabilities in a way that amortises the computational cost of mapping a given task onto this space of capabilities in order to make the above predictions (the computation time penalty in the second part).
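To check I'm not smuggling anything in, here is a minimal sketch of the loss as I'm reading it. The names (build_representation, solve_with_representation) and the penalty weight are my own stand-ins, not your notation; the point is just to locate the two access restrictions and where the compute penalty sits.

```python
import time

# My reading of the factorised objective, with hypothetical names:
#   piece 1 (the "inside"): model -> representation, never sees tasks
#   piece 2 (the "outside"): (representation, task) -> prediction, never sees the model,
#                            and is penalised for the compute it spends per task.

LAMBDA = 0.1  # weight on the computation-time penalty (illustrative)

def interpretability_loss(build_representation, solve_with_representation, model, tasks):
    """Loss for one candidate interpretability procedure.

    build_representation: model -> representation                      (piece 1)
    solve_with_representation: (representation, task) -> prediction    (piece 2)
    tasks: iterable of (task, target) pairs about the original model's behaviour
    """
    representation = build_representation(model)  # this cost is amortised over all tasks

    total_error = 0.0
    total_compute = 0.0
    for task, target in tasks:
        start = time.perf_counter()
        prediction = solve_with_representation(representation, task)
        total_compute += time.perf_counter() - start   # only piece 2's time is penalised
        total_error += float(prediction != target)     # stand-in for a task-specific error

    n = len(tasks)
    return total_error / n + LAMBDA * total_compute / n
```

Because only the second piece pays for compute, it becomes rational for the first piece to do expensive, reusable analysis of the model up front, which is the amortisation point in item 2 above.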
This is plausible for the same reason that the original model can have good general performance: there are general underlying skills or capabilities that can be assembled to perform well on a wide range of tasks, and if you can discover those capabilities and their structure you should be able to generalise to predict other task performance and how it varies.
Indirectly there is a kind of optimisation pressure on the complexity of I(M), just because you're asking it to be broadly useful (for a computationally penalised second piece) for prediction on many tasks, so by bounding the generalisation error you're likely to bound the complexity of that representation.
I'm on board with that, but I think it is possible that some might agree this is a path towards automated research of something but not that the something is interpretability. After all, your I(M) need not be interpretable in any straightforward way. So implicitly the space of I's you are searching over is constrained to something intrinsically reasonably interpretable?
Since later you say “human-led interpretability absorbing the scientific insights offered by I*” I guess not, and your point is that there are many safety-relevant applications of I*(M) even if it is not very human comprehensible.
Makes sense to me, thanks for the clarifications.
I found working through the details of this very informative. For what it's worth, I'll share here a comment I made internally at Timaeus about it, which is that in some ways this factorisation into $M_1$ and $M_2$ reminds me of the factorisation into the map $m \mapsto S_m$ from a model to its capability vector (this being the analogue of $M_2$) and the map $S_m \mapsto \sigma^{-1}(E_m) = \beta^\top S_m + \alpha$ from capability vectors to downstream metrics (this being the analogue of $M_1$) in Ruan et al.'s observational scaling laws paper.
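For concreteness, here is a toy numpy rendering of the shape of that factorisation as I understand Ruan et al.: PCA over a benchmark-score matrix gives capability vectors $S_m$ (the $M_2$-analogue), and an affine fit through a logistic link gives the map to a downstream metric (the $M_1$-analogue). This is a schematic of the fit, not their actual pipeline, and the function names and the choice of $k$ are mine.

```python
import numpy as np

# Schematic of the Ruan et al.-style factorisation (my own toy version, not their code):
#   M2-analogue: model -> capability vector, via PCA over a matrix of benchmark scores
#   M1-analogue: capability vector -> downstream metric, via a linear fit through a sigmoid link

def capability_vectors(benchmark_scores, k=3):
    """M2-analogue: map each model (a row of benchmark scores) to a k-dim capability vector S_m."""
    X = benchmark_scores - benchmark_scores.mean(axis=0)   # centre each benchmark
    _, _, Vt = np.linalg.svd(X, full_matrices=False)       # principal components of the score matrix
    return X @ Vt[:k].T                                    # S_m = projection onto top-k PCs

def fit_downstream(S, downstream_metric, eps=1e-6):
    """M1-analogue: fit sigma^{-1}(E_m) = beta^T S_m + alpha by least squares (sigma = logistic)."""
    E = np.clip(downstream_metric, eps, 1 - eps)
    y = np.log(E / (1 - E))                                # sigma^{-1}(E_m), the logit link
    A = np.column_stack([S, np.ones(len(S))])              # [S_m, 1] for the affine fit
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    beta, alpha = coef[:-1], coef[-1]
    return beta, alpha

def predict_downstream(S_new, beta, alpha):
    """Compose the two pieces: capability vector -> predicted downstream metric."""
    return 1.0 / (1.0 + np.exp(-(S_new @ beta + alpha)))
```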
In your case the output metrics have an interesting twist, in that you don’t want to just predict performance but also in some sense variations of performance within a certain class (by e.g. varying the prompt), so it’s some kind of “stable” latent space of capabilities that you’re constructing.
Anyway, factoring the prediction of downstream performance/capabilities through some kind of latent space object I(M) in your case, or latent spaces of capabilities in Ruan et al’s case, seems like a principled way of thinking about the kind of object we want to put at the center of interpretability.
As an entertaining aside: as an algebraic geometer, the proliferation of $I_1(M), I_2(M), \ldots$, i.e. "interpretability objects" between models $M$ and downstream performance metrics, reminds me of the proliferation of cohomology theories and the search for "motives" to unify them. That is basically interpretability for schemes!