On finetuned animal teachers: we tried this, and it works too. It’s a bit hidden. In a footnote at the bottom of page 4, we say:
We replicate the results reported in this section without system prompts. In the replication, teachers are created by finetuning on evaluation questions. These results are given in Figure 14 in the Appendix.
I agree the theorem is fairly limited (particularly because it assumes the teacher and student are derived by single steps of GD), but I argue that it is, in fact, enlightening. Three reasons:
A priori, I don’t think it would be crazy to believe that training M to match a similarly parametrized M’ on input distribution D could cause M to diverge from M’ on some other distribution D’. This probably can happen when M’ is behaviorally similar but parametrized differently. So a justifiable intuition for the true result has to incorporate the dependence on the parametrization of M’. Even if that dependence feels obvious in hindsight (“well yeah, the models have to have similarly entangled representations for this to happen”), you’d first have to notice that the dependence exists at all. Why did this entanglement have to be path-dependent? Could it not have been universal across models?
To test the a priori plausibility of the claim, I tried asking o3 and Opus 4. You can see the responses below. (It’s unclear to me how much evidence this is.)
In a complex system, being able to eliminate half of the outcome space suggests interesting structure. For example, if a theory of physics showed that a butterfly flapping its wings never decreases the probability of a hurricane, that would be a surprising insight into a fundamental property of chaotic systems—even though it only “lower-bounds” the change in hurricane probability at 0.
The proof of the theorem actually does quantify transmission. It is given by equation (2) in terms of inner products of teacher and student gradients on the distillation distribution. So, if you are willing to compute or make assumptions about these terms, there are more insights to be had.
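For intuition, here is a first-order sketch of the flavor of that calculation. The notation and simplifications are mine (shared initialization, teacher obtained by one GD step, squared-error distillation loss), so read it as a reconstruction of the idea rather than the paper’s exact equation (2).

```latex
% First-order sketch (my notation and assumptions, not the paper's exact eq. (2)):
% the student starts at the shared initialization \theta_0, the teacher is
% \theta_T = \theta_0 + \delta_T after one GD step, and distillation minimizes a
% squared-error loss on the distribution D.
\[
  L_D(\theta) = \mathbb{E}_{x \sim D}\,\bigl\| f_\theta(x) - f_{\theta_T}(x) \bigr\|^2 .
\]
% Linearizing the teacher around \theta_0, with J(x) = \nabla_\theta f_{\theta_0}(x):
%   f_{\theta_T}(x) \approx f_{\theta_0}(x) + J(x)\,\delta_T,
% one GD step with learning rate \eta gives the student update
\[
  \delta_S = -\eta\, \nabla_\theta L_D(\theta_0)
           \approx 2\eta\, \mathbb{E}_{x \sim D}\!\bigl[ J(x)^\top J(x) \bigr]\, \delta_T ,
\]
% and therefore
\[
  \langle \delta_S, \delta_T \rangle
  \approx 2\eta\, \delta_T^{\top}\, \mathbb{E}_{x \sim D}\!\bigl[ J(x)^\top J(x) \bigr]\, \delta_T
  \;\ge\; 0 ,
\]
% since the expected Gram matrix of the gradients is positive semidefinite: to first
% order, the student's update can never point away from the teacher's update.
```

The expected Gram matrix of gradients on D is where both the dependence on the distillation distribution and the shared-initialization assumption enter; with further assumptions about those inner products, you can get the more quantitative statements mentioned above.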
That said, I’m with you: armed only with the theorem, I would not have predicted our results!
Prompt
Consider the following machine learning experiment: start with a neural network M. Create a new network, M’, by copying the network M and perturbing its parameters a small amount (e.g. by performing one step of gradient descent on some loss). Now, train M with one step of gradient descent to match the behavior of M’ on some input distribution D. This produces the student, S. So, S has moved closer to M’ on D.
Now consider evaluating S and M’ on some other input distribution, D’. Is it possible that S ends up further from M’ than M on D’, even though S was trained to imitate M’? Consider the ML literature and your best deep learning intuition when answering.
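(An aside, not part of the prompt.) The setup above is simple to make concrete. Here is a minimal PyTorch sketch of the experiment; the architecture, losses, step sizes, and the choice of D and D’ are arbitrary illustrations of mine, not anything from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_net():
    # Tiny MLP standing in for "a neural network M"; architecture is arbitrary.
    return nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

# M: the original network.
M = make_net()

# M': copy M, then perturb it with one gradient step on some auxiliary loss.
M_prime = make_net()
M_prime.load_state_dict(M.state_dict())
x_aux, y_aux = torch.randn(256, 10), torch.randn(256, 1)
aux_loss = nn.functional.mse_loss(M_prime(x_aux), y_aux)
aux_grads = torch.autograd.grad(aux_loss, list(M_prime.parameters()))
with torch.no_grad():
    for p, g in zip(M_prime.parameters(), aux_grads):
        p -= 1e-2 * g  # small perturbation: one GD step

# S: copy M, then take one GD step to match M' on the distillation distribution D.
S = make_net()
S.load_state_dict(M.state_dict())
x_D = torch.randn(1024, 10)  # samples from D
distill_loss = nn.functional.mse_loss(S(x_D), M_prime(x_D).detach())
distill_loss.backward()
with torch.no_grad():
    for p in S.parameters():
        p -= 1e-2 * p.grad

# Compare against M' on a different distribution D' (here: a shifted input distribution).
x_Dprime = torch.randn(1024, 10) + 3.0

def gap(a, b, x):
    """Mean squared difference between two networks' outputs on inputs x."""
    with torch.no_grad():
        return nn.functional.mse_loss(a(x), b(x)).item()

print("distance(M, M') on D':", gap(M, M_prime, x_Dprime))
print("distance(S, M') on D':", gap(S, M_prime, x_Dprime))
```

Whether the gap between S and M’ on D’ ends up above or below the gap between M and M’ is exactly the question I put to the models.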
Response: Opus 4
Yes, it’s definitely possible that S could end up further from M’ than M was on D’, despite S being trained to imitate M’ on D. This is a fundamental issue in machine learning related to distribution shift and the nature of function approximation.
Here’s why this can happen:
1. No Free Lunch for Generalization: When S is trained to match M’ on distribution D, it’s only guaranteed to improve on D. The optimization process finds a direction in parameter space that reduces the imitation loss on D, but this direction may increase the distance to M’ in regions of input space not covered by D.
2. Implicit Inductive Biases: The gradient descent step that produces S from M will follow the loss landscape’s geometry, which encodes implicit biases about how to interpolate between training points. These biases might cause S to extrapolate very differently from M’ outside of D.
3. Underspecification: If D doesn’t fully specify the behavior of M’, there are many functions that could match M’ equally well on D. The particular function that gradient descent finds (S) depends on the starting point (M) and the optimization dynamics, not just the target behavior on D.
4. Practical Example: Imagine M’ has learned a complex nonlinear decision boundary, but D only contains points from one side of this boundary. S might learn a simpler linear approximation that works perfectly on D but diverges dramatically from M’ on the other side of the boundary (which would be revealed by D’).
This phenomenon is closely related to:
- Domain adaptation failures, where models trained on one distribution perform poorly on another
- Catastrophic forgetting in continual learning
- Shortcut learning, where models learn simpler features that work on the training distribution but don’t capture the true underlying function
The ML literature on distribution shift, particularly work on out-of-distribution generalization and domain generalization, extensively documents how optimization on one distribution can lead to arbitrary behavior on other distributions. This is why techniques like importance weighting, domain-invariant representations, and robust optimization have been developed to try to mitigate these issues.
[link to chat]
Response: o3
Short answer: Yes.
Training S to imitate M′ on one distribution (D) can easily make S more different from M′ elsewhere—sometimes dramatically so—because the gradient step you apply is only constrained by how the two models behave on D, not on the whole input space. Below is an intuitive breakdown, a concrete counter-example, and pointers to empirical evidence.
… (link to rest of chat)
P.S. Thanks for the question. This was helpful to think through.