I’ve only skimmed this post, but I like it because I think it puts into words a fairly common model (that I disagree with). I’ve heard “it’s all just a stack of heuristics” from several people, both as an explanation of neural networks and as a claim that all intelligence works this way. (I’m probably overinterpreting their words to some extent; they likely meant a weaker, more nuanced version. But like you say, it can be useful to talk about the strong version.)
I think you’ve correctly identified the flaw in this idea (it isn’t predictive and it’s unfalsifiable, so it isn’t actually explaining anything, even if it feels like it is). You don’t seem to think this is a fatal flaw. Why?
You seem to answer:
However, the key interpretability-related claim is that heuristics based decompositions will be human-understandable, which is a more falsifiable claim.
But I don’t see why “heuristics based decompositions will be human-understandable” is an implication of the theory. As an extreme counterexample, logic gates are interpretable, but when stacked up into a computer they are ~uninterpretable. It looks to me like you’ve just tacked an interpretability hypothesis onto a heuristics hypothesis.
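To make the logic-gate point concrete, here’s a toy sketch (my own construction, not anything from the post): randomly wire together a few hundred NAND gates. Each gate is a perfectly interpretable function on its own, but the composed circuit is effectively a black box you can only characterize by enumeration.

```python
# Toy illustration (not from the post): interpretable gates, opaque composition.
import itertools
import random

def nand(a: int, b: int) -> int:
    """A single, fully interpretable 'heuristic': NAND of two bits."""
    return 1 - (a & b)

def random_circuit(n_inputs: int, n_gates: int, seed: int = 0):
    """Randomly wire n_gates NAND gates, each reading two earlier wires."""
    rng = random.Random(seed)
    wiring = [
        (rng.randrange(n_inputs + i), rng.randrange(n_inputs + i))
        for i in range(n_gates)
    ]

    def run(bits):
        wires = list(bits)
        for a, b in wiring:
            wires.append(nand(wires[a], wires[b]))
        return wires[-1]  # the last gate is the circuit's output

    return run

circuit = random_circuit(n_inputs=4, n_gates=200)
# Each gate is trivially understandable; the overall input/output behaviour
# offers little insight beyond brute-force enumeration of the truth table.
for bits in itertools.product([0, 1], repeat=4):
    print(bits, circuit(bits))
```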
Thanks for reading my post! Here’s how I think this hypothesis is helpful:
It’s possible that we wouldn’t be able to understand what’s going on even if we had some perfect way to decompose a forward pass into interpretable constituent heuristics. I’m skeptical that this would be the case, mostly because I think (1) we can get a lot of juice out of auto-interp methods and (2) we probably wouldn’t need to understand that many heuristics at the same time (which is the problem in your logic gate example for modern computers). At a minimum, I would argue that the decomposed bag of heuristics is likely to be much more interpretable than the original model itself.
Suppose that the hypothesis is true. Then it at least suggests that interpretability researchers should put more effort into finding and studying individual heuristics/circuits, as opposed to the current, more “feature-centric” framework. I don’t know exactly how this would manifest, but it felt worth saying. I believe some of the empirical work I cited suggests that we might make more incremental progress right now if we focused more on heuristics.
I think the problem might be that you’ve given this definition of heuristic:
A heuristic is a local, interpretable, and simple function (e.g., boolean/arithmetic/lookup functions) learned from the training data. There are multiple heuristics in each layer and their outputs are used in later layers.
Taking this definition seriously, it’s easy to decompose a forward pass into such functions.
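As a toy illustration of why this is so easy (my own construction, not from the post): relabel each neuron of a small MLP, i.e. its affine map plus ReLU, as one “heuristic”. The quoted definition is then satisfied trivially, which is exactly why it makes no predictions on its own.

```python
# Toy illustration (my construction): any MLP forward pass already decomposes
# into "local, simple arithmetic functions" -- one per neuron -- so the quoted
# definition is satisfied by every network and rules nothing out.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

def neuron_heuristic(w, b):
    """Each 'heuristic' is just an affine map followed by a ReLU."""
    return lambda x: max(0.0, float(w @ x + b))

layer1 = [neuron_heuristic(W1[i], b1[i]) for i in range(8)]  # layer-1 "heuristics"
layer2 = [neuron_heuristic(W2[i], b2[i]) for i in range(2)]  # a later layer of "heuristics"

x = rng.normal(size=4)
h = np.array([f(x) for f in layer1])   # outputs of the layer-1 heuristics
y = np.array([g(h) for g in layer2])   # consumed by the layer-2 heuristics
print(y)
```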
But you have a much more detailed idea of a heuristic in mind. You’ve pointed toward some properties it might have in your point (2), but haven’t put them into specific words.
Some options:
- A single heuristic is causally dependent on <5 heuristics below and influences <5 heuristics above.
- The inputs and outputs of heuristics are strong information bottlenecks with a limit of 30 bits.
- The function of a heuristic can be understood without reference to >4 other heuristics in the same layer.
- A single heuristic is used in <5 different ways across the data distribution.
- A model is made up of <50 layers of heuristics.
- Large arrays of parallel heuristics often output information of the same type.
Some combination of these (or similar properties) would turn the heuristics intuition into a real hypothesis capable of making predictions.
If you don’t go into this level of detail, it’s easy to trick yourself into thinking that (2) basically kinda follows from your definition of heuristics, when it really really doesn’t. And that will lead you to never discover the value of the heuristics intuition, if it is true, and never reject it if it is false.
I agree that if you put more limitations on what heuristics are and how they compose, you end up with a stronger hypothesis. I think it’s probably better to leave that out and try to do some more empirical work before making a claim there, though (I suppose you could say that the hypothesis isn’t actually making a lot of concrete predictions yet at this stage).
I don’t think (2) necessarily follows, but I do sympathize with your point that the post is perhaps a more specific version of the hypothesis that “we can understand neural network computation by doing mech interp.”