I like 1) and think this is worth doing. I believe that Mechanistic Interpretability researchers are already somewhat concerned about insights not generalising from toy models to larger models, let alone to novel architectures, so work on model-agnostic methods could be useful within the same paradigm too.
Something to note: I’m not confident about the track record of model-agnostic methods (such as saliency maps). I’ve heard from at least one ML researcher that saliency maps have a poor track record and have been shown to be unreliable in a variety of experiments. Do you know of any other examples of model-agnostic interpretability methods which you think might be very useful? Maybe saliency maps don’t matter as much as the idea of model-agnostic methods, in which case feel free to disregard this. I’ve heard before of interest in generally approaching models as black boxes (the “ML psychologist” framing) while we try to understand them, so I don’t think the value of this approach lies too heavily in specific prior methods.
With respect to 2), while I think this is reasonable, I believe the salient point is whether models from the current paradigm will become sufficiently dangerous soon enough that they warrant more/less focus. Theoretically, the space of possible ML architecture paradigms producing doom is large, and the order in which they manifest is roughly the order in which we should solve them (i.e. align current systems, then the next paradigm’s systems, then the paradigm after that, each buying time).
However, I think there are good enough reasons to work on model-agnostic methods that don’t rely on AGI doom originating in a new paradigm.

Overall, very exciting! Good luck!
Hi Joseph! I’ll briefly address the saliency map concern here – it likely originates from this paper, which showed that some types of saliency mapping methods had no more explanatory power than edge detectors. It’s a great paper, and worth a read. The key thing to note is that this was only true of some gradient-based saliency mapping methods, which are, of course, model-specific. Gradients can be deceptive! Model-agnostic, perturbation-based saliency mapping doesn’t suffer from the same kinds of problems – see p.12 here.
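For concreteness, here’s a minimal sketch of the perturbation-based idea in the style of occlusion saliency – an illustrative assumption on my part, not the exact method from the paper. The model is treated purely as a black box: `predict_fn` is a hypothetical callable mapping a batch of images to class probabilities, so the same code works for any architecture.

```python
import numpy as np

def occlusion_saliency(predict_fn, image, target_class, patch=8, stride=8, fill=0.0):
    """Slide an occluding patch over the image and record how much the
    model's confidence in `target_class` drops when each region is hidden.
    `predict_fn` is a hypothetical black-box interface:
    (N, H, W, C) array of images -> (N, num_classes) array of probabilities."""
    base_score = predict_fn(image[None])[0, target_class]
    height, width = image.shape[:2]
    saliency = np.zeros((height, width))
    counts = np.zeros((height, width))
    for y in range(0, height - patch + 1, stride):
        for x in range(0, width - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # blank out one region
            drop = base_score - predict_fn(occluded[None])[0, target_class]
            saliency[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return saliency / np.maximum(counts, 1)  # average confidence drop per pixel
```

Since this only needs forward passes – no gradients, no access to the model’s internals – it sidesteps the gradient pathologies the paper describes.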
Thanks Jessica!