There are a lot of issues with saliency mapping techniques, as you are aware (I saw you link to the “sanity checks” paper below). Funnily enough, the super simple technique of occlusion mapping does seem to work very well! It’s kind of hilarious that there are so many mathematically complicated saliency mapping techniques, yet I have seen no good arguments for why they are better than plain occlusion mapping. I think this is a symptom of people optimizing for paper publishing and trying to impress reviewers with novelty and math rather than building things that are actually useful.
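To show just how simple occlusion mapping is, here's a minimal sketch (the model, patch size, and fill value are all placeholders; a real use would pass in a trained classifier):

```python
import numpy as np

def occlusion_map(model, image, patch=8, stride=8, fill=0.0):
    """Slide an occluding patch over the image and record how much the
    model's score for its original top class drops at each position.
    `model` is any callable mapping an image array to a class-score vector."""
    base_scores = model(image)
    target = int(np.argmax(base_scores))
    h, w = image.shape[:2]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # gray/zero out one patch
            # big score drop => this region mattered for the prediction
            heat[i, j] = base_scores[target] - model(occluded)[target]
    return heat

# toy stand-in "model": score depends only on the top-left quadrant's mean
def toy_model(img):
    s = img[:8, :8].mean()
    return np.array([s, 1.0 - s])

img = np.ones((16, 16))
heat = occlusion_map(toy_model, img, patch=8, stride=8)
# only occluding the top-left patch changes the toy model's score
```

That's the whole method: no gradients, no backprop tricks, and it works on any black-box model you can call.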
You may find this interesting: “Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization”. What they show is that a very simple model-agnostic technique (showing the natural images that most strongly activate a unit) lets people make better predictions about how a CNN will behave than Olah’s activation maximization method, which produces synthetic images that can be hard to understand. This is exactly the sort of empirical testing I suggested in my Less Wrong post from Nov last year.
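The exemplar idea is about as simple as it gets; a rough sketch (the activation function and dataset here are toy placeholders, not the paper's actual setup):

```python
import numpy as np

def top_exemplars(activation_fn, dataset, k=3):
    """Return the k dataset images that most strongly activate a unit,
    plus their activations. `activation_fn` maps an image to a scalar
    activation; `dataset` is any sequence of images."""
    acts = np.array([activation_fn(img) for img in dataset])
    order = np.argsort(acts)[::-1][:k]  # indices of the k largest activations
    return [dataset[i] for i in order], acts[order]

# toy example: treat mean pixel value as the "unit activation"
rng = np.random.default_rng(0)
dataset = [rng.random((4, 4)) for _ in range(10)]
exemplars, acts = top_exemplars(lambda im: im.mean(), dataset, k=3)
```

Instead of optimizing a synthetic image, you just rank real images by how much they activate the unit and show the top few, which is presumably why people find them easier to interpret.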
The comparison isn’t entirely fair, because Olah’s techniques were designed for detailed mechanistic understanding, not for letting users quickly predict CNN behaviour. But it does show that simple techniques can help users understand, at a high level, how an AI works.
I looked into these methods a lot back in 2020 (I’m not as up to date on the latest literature) and wrote a review in my 2020 paper, “Self-explaining AI as an alternative to interpretable AI”.