Great stuff! Excited to see this extended and applied. I hope to dive deeper into this series and your followup work.
Came to the appendix for 2.2 on metrics, still feel curious about the metric choice.
I’m trying to figure out why this is wrong: “loss is not a good basis for a primary metric, even though it’s worth looking at and intuitive, because it hides potentially large and important changes to the X -> Y mapping learned by the network that have equivalent loss. Instead, we should just measure how yscrubbed_i has changed from yhat_i (the original model’s output) at each x_i we care about.” I think I might have heard people call this a “function space” view (it’s been a while since I read that stuff), but that wording is confusing given your notation of f.
Dumb regression example. Suppose my training dataset is scalar (x, y) pairs that almost all fall along y = sin(x). I fit a humungo network N, and when I plot N(x) for all my xs I see a great approximation of sin(x). I pick a weird subset of my data where, instead of y = sin(x), this data is all y = 0 (as far as I can tell this is allowed? I don’t recall restrictions on the scrubbing distribution having to match training), and use it to compute my MSE loss during scrubbing. I find a hypothesis that recovers 100% of performance! But I plot it and it looks like cos(x), which, unless I’m tired, has the same MSE against y = 0 in expectation.
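For concreteness, here’s a little numpy sketch of the arithmetic in that example (the sampling range, sample count, and variable names are just my illustration):

```python
import numpy as np

# On the weird subset where the labels are all y = 0, a scrubbed model
# computing cos(x) gets the same expected MSE as the original sin(x)-shaped
# model, even though the two functions differ a lot pointwise.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, 100_000)  # inputs from the weird subset
y = np.zeros_like(x)                        # labels on this subset are all 0

original = np.sin(x)  # what the big network N computes
scrubbed = np.cos(x)  # what the scrubbed hypothesis happens to compute

mse_original = np.mean((original - y) ** 2)  # E[sin^2] over a period, ~0.5
mse_scrubbed = np.mean((scrubbed - y) ** 2)  # E[cos^2] over a period, ~0.5

# "Output distance": compare the scrubbed outputs to the original model's
# outputs instead of to the labels. This is large, flagging the mismatch.
output_distance = np.mean((scrubbed - original) ** 2)  # ~1.0
```

So the label-based MSE is identical in expectation for the two functions, while the output distance immediately distinguishes them.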
I probably want to know if my subnetwork is actually computing a very different y for the same exact x, right? Even if it happens to have a low, or even equal or better, loss?
(I see several other benefits of comparing model output against scrubbed model output directly, for instance allowing application to data which is drawn from your target distribution but not labelled.)
Even if this is correct, I doubt this matters much right now compared to the other immediate priorities for this work, but I’d hope someone was thinking about it and/or that I can become less confused about why the loss is justified.
I think this is a great question.
First, we do mention this a bit in 2.2: “In this work, we prefer the expected loss. Suppose that one of the drivers of the model’s behavior is noise: trying to capture the full distribution would require us to explain what causes the noise. For example, you’d have to explain the behavior of a randomly initialized model despite the model doing ‘nothing interesting’.”
We also discuss this a bit in the Limitations section of the main post, specifically, the part starting: “Another limitation is that causal scrubbing does not guarantee that it will reject a hypothesis that is importantly false or incomplete.”
That being said, it’s worth noting that your performance metric gives you an important degree of freedom. In your case, if the goal is to explain “why the model predicts y = sin(x)”, it makes more sense to use the performance metric f(x) = |sin(x) - model(x)|. If you use the metric (model(x) - y)^2, you’re trying to explain why the predictor sin(x) does as well (as poorly) as it does. In which case, yes, cos(x) does as poorly as sin(x) on the data.
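To make that degree of freedom concrete, here is a minimal sketch of the two metrics side by side (the function names are illustrative, not from the post):

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 100_000)
y = np.zeros_like(x)  # the all-zero labels from the example above

def metric_vs_sin(model_out):
    # Performance metric f(x) = |sin(x) - model(x)|:
    # "why does the model compute sin(x)?"
    return np.mean(np.abs(np.sin(x) - model_out))

def metric_vs_labels(model_out):
    # Performance metric (model(x) - y)^2:
    # "why does the predictor do as well (as poorly) as it does?"
    return np.mean((model_out - y) ** 2)

# Against the labels, sin(x) and cos(x) are indistinguishable (both ~0.5);
# against the model-as-sin metric, sin(x) scores 0 while cos(x) does not.
print(metric_vs_labels(np.sin(x)), metric_vs_labels(np.cos(x)))
print(metric_vs_sin(np.sin(x)), metric_vs_sin(np.cos(x)))
```

Which metric you pick amounts to choosing which question causal scrubbing is answering.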
Really appreciate the response :)
Totally acknowledge the limitations you outlined.
I was aiming to construct an example which would illustrate how the loss metric would break in a black-box setting (where X and Y are too gnarly to vis). In that case you have no clue that your model implements sin(x), so I don’t see how that could be the goal. In the black-box setting you do get access to the distance between the scrubbed y and y_true (loss) and the distance between the scrubbed y and the original y (my proposal; let’s call it output distance). When you look at loss, it is possible for causal scrubbing to yield an explanation of the model’s performance which, from my perspective, is obviously bad in that it causes the function implemented by the model to be radically different.
If that is one of the classes of “importantly false or incomplete hypotheses”, then why not check the predicted ys against each other and favor hypotheses that have both close outputs and low loss?
(I think these converge to the same thing as the original model’s loss goes to zero, but prior to that, driving output distance to zero is the only way to get a function equivalent to the original network, I claim.)
It doesn’t actually fix the problem! Suppose that your model behavior worked as follows:
f(x) = sin(x) + sin(x) − sin(x)
That is, there are three components, two of which exhibit the behavior and one of which is inhibitory. (For example, we see something similar in the IOI paper with name movers and backup name movers.) Then if you find a single circuit of the form sin(x), you would still be missing important parts of the network. That is, close model outputs don’t guarantee that you’ve correctly captured all the considerations, since you can still miss considerations that “cancel out”. (Though close-output checks will have fewer false positives.)
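Here’s a toy numpy version of that cancellation (the components are chosen purely for illustration), showing how a single-component circuit can match the full model’s outputs exactly while missing the two components that cancel:

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 1_000)

# Three components: two excitatory, one inhibitory: f(x) = sin + sin - sin.
components = [np.sin(x), np.sin(x), -np.sin(x)]
full_model = sum(components)

# A circuit keeping only the first component has zero output distance...
single_circuit = components[0]
assert np.allclose(single_circuit, full_model)

# ...but it misses structure that matters: dropping just the inhibitory
# component changes the model's behavior by a lot.
ablated = components[0] + components[1]  # remove the -sin(x) component
print(np.max(np.abs(ablated - full_model)))  # ~1.0, a large behavior change
```

The single-sin hypothesis passes any output-closeness check yet says nothing about the pair of components that cancel each other out.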
However, swapping from low loss to close outputs requires sacrificing other nice properties you want. For example, while loss is an inherently meaningful metric, KL divergence or L2 distance to the original outputs is rarely the thing you care about. And the biggest issue is that you then have to explain a bunch of noise, which we might not care about.
Of course, I still encourage people to think about what their metrics are actually measuring, and what they could be failing to capture. And if your circuit is good according to one metric but bad according to all of the others, there’s a good chance that you’ve overfit to that metric!