Simon-Pepin Lehalleur weighs in on the DevInterp Discord:
I think his overall position requires taking degeneracies seriously: he seems to be claiming that there is a lot of path dependency in weight space, but very little in function space 😄
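To make the weight-space vs. function-space contrast concrete, here is a minimal toy sketch (my own setup, not anything from the discussion): train a few small MLPs on the same task from different seeds, then compare how far apart the runs end up in raw parameter distance versus in the outputs they compute on a shared probe set. On the claim above, the former should be large (not least because of permutation and scaling degeneracies in weight space) while the latter stays small.

```python
# Toy probe of the claim above: train several small MLPs that differ only in
# their random seed (init and data order), then compare how far apart they end
# up (i) in weight space and (ii) in function space (outputs on a probe set).
# All names and hyperparameters are illustrative, not from the discussion.
import itertools
import torch
import torch.nn as nn

def make_mlp(seed: int) -> nn.Sequential:
    torch.manual_seed(seed)
    return nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

def train(model: nn.Module, seed: int, steps: int = 2000) -> nn.Module:
    g = torch.Generator().manual_seed(seed)
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    for _ in range(steps):
        x = torch.rand(64, 2, generator=g) * 4 - 2      # inputs in [-2, 2]^2
        y = (x[:, :1] * x[:, 1:]).sin()                  # fixed target function
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

models = [train(make_mlp(seed=s), seed=s) for s in range(4)]
probe = torch.rand(512, 2) * 4 - 2                       # shared probe inputs

def weight_distance(m1, m2):
    # Naive L2 distance in parameter space; deliberately ignores permutation
    # and scaling symmetries, which is part of why it comes out large.
    v1 = torch.cat([p.flatten() for p in m1.parameters()])
    v2 = torch.cat([p.flatten() for p in m2.parameters()])
    return (v1 - v2).norm().item()

def function_distance(m1, m2):
    with torch.no_grad():
        return (m1(probe) - m2(probe)).pow(2).mean().sqrt().item()

for i, j in itertools.combinations(range(len(models)), 2):
    print(f"runs {i},{j}: weight dist = {weight_distance(models[i], models[j]):.2f}, "
          f"function RMSE = {function_distance(models[i], models[j]):.4f}")
```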
In general his position seems broadly compatible with DevInterp:
models learn circuits/algorithmic structure incrementally
the development of structures is controlled by loss landscape geometry
and also, possibly in more complicated cases, by the landscapes of “effective losses” corresponding to subcircuits (a toy sketch of one such effective loss follows below)...
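One simple way to make “effective losses” concrete, purely as an illustration: restrict the full loss to a chosen subset of parameters (the putative subcircuit) and freeze the complement, i.e. take L_eff(w_S) = L(w_S, w_{S^c} fixed). Whether this freezing construction is the right notion of effective loss is an assumption of the sketch, not something the discussion commits to; the choice of subset below is arbitrary.

```python
# An "effective loss" for a subcircuit, realized by restricting the full loss
# to a subset of parameters w_S and holding the complement fixed.
# The layer chosen as the "subcircuit" is an arbitrary illustrative choice.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

# Treat the second linear layer as the "subcircuit" of interest.
subcircuit = {name: p for name, p in model.named_parameters() if name.startswith("2.")}

def effective_loss(batch_x, batch_y):
    """Full loss viewed as a function of the subcircuit's parameters only."""
    for name, p in model.named_parameters():
        p.requires_grad_(name in subcircuit)      # freeze the complement
    return loss_fn(model(batch_x), batch_y)

x = torch.randn(32, 10)
y = torch.randint(0, 10, (32,))
L_eff = effective_loss(x, y)
L_eff.backward()                                   # gradients flow only into w_S
grad_norms = {name: p.grad.norm().item() for name, p in subcircuit.items()}
print(L_eff.item(), grad_norms)
```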
This perspective is certainly incompatible with a naive SGD = Bayes = Watanabe’s global SLT learning process, but I don’t think anyone has (ever? or at least not for a long time?) made that claim for non-toy models.
It seems that the difference with DevInterp is that
we are more optimistic that it is possible to understand which geometric observables of the landscape control the incremental development of circuits
we expect, based on local SLT considerations, that those observables have to do with the singularity theory of the loss and also of sub/effective losses, with the LLC (local learning coefficient) being the most important but not the only one (a rough estimator sketch follows this list)
we dream that it is possible to bootstrap this to a full-fledged S4 correspondence, or at least to get as close as we can.
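For the LLC, the estimator used in the local-SLT literature is roughly λ̂(w*) ≈ nβ(E[Lₙ(w)] − Lₙ(w*)), where the expectation is over a tempered posterior localized around the trained parameters w* and β = 1/log n. Below is a rough from-scratch sketch of that recipe using SGLD sampling; the hyperparameters, the single chain, and the single-batch baseline are illustrative simplifications, not a reference implementation.

```python
# Rough sketch: estimate the local learning coefficient (LLC) at the model's
# current parameters w* by SGLD sampling of a localized, tempered posterior
#   p(w) ∝ exp(-n*beta*L_n(w) - (gamma/2)*||w - w*||^2),
# then lambda_hat ≈ n * beta * (E[L_n(w)] - L_n(w*)).
# Step size, gamma, and chain length are illustrative choices only.
import copy
import math
import torch

def estimate_llc(model, loss_fn, data_loader, n, num_steps=500,
                 step_size=1e-4, gamma=100.0):
    beta = 1.0 / math.log(n)                          # inverse temperature 1/log n
    w_star = [p.detach().clone() for p in model.parameters()]
    sampler = copy.deepcopy(model)
    params = list(sampler.parameters())

    # Single-batch approximation of L_n(w*), the baseline in the estimator.
    x0, y0 = next(iter(data_loader))
    with torch.no_grad():
        loss_at_center = loss_fn(model(x0), y0).item()

    chain_losses = []
    data_iter = iter(data_loader)
    for _ in range(num_steps):
        try:
            x, y = next(data_iter)
        except StopIteration:
            data_iter = iter(data_loader)
            x, y = next(data_iter)

        loss = loss_fn(sampler(x), y)
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, g, p0 in zip(params, grads, w_star):
                drift = n * beta * g + gamma * (p - p0)     # tempered + localized
                noise = torch.randn_like(p) * math.sqrt(step_size)
                p.add_(-0.5 * step_size * drift + noise)    # SGLD update
        chain_losses.append(loss.item())

    # A careful estimate would discard burn-in and average over several chains.
    expected_loss = sum(chain_losses) / len(chain_losses)
    return n * beta * (expected_loss - loss_at_center)
```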
Ok, no problem. You can also add the following:
I am sympathetic to, but also unsatisfied with, a strong empiricist position about deep learning. It seems to me that it is based on a slightly misapplied physical, and specifically thermodynamical, intuition: namely, that we can just observe a neural network and see or easily guess what the relevant “thermodynamic variables” of the system are.
For ordinary 3D physical systems, we tend to know or easily discover those thermodynamic variables through simple interactions and observations. But a neural network is an extremely high-dimensional system which we can only “observe” through mathematical tools. The loss is clearly one such thermodynamic variable, but if we expect NNs to be, in some sense, stat mech systems, it can’t be the only one (otherwise the learning process would be much more chaotic and unpredictable). One view of DevInterp is that we are “just” looking for those missing variables...
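Read operationally, “looking for the missing variables” suggests a simple workflow: log several candidate observables at each checkpoint and see which of them, beyond the loss itself, track developmental changes. The sketch below is a hypothetical illustration of that workflow; the particular observables are examples only, and it reuses the estimate_llc sketch from above.

```python
# Hypothetical checkpoint logger for candidate "thermodynamic variables":
# train loss (the obvious one), a cheap geometric proxy (parameter norm),
# and the LLC estimate sketched earlier. These are example candidates,
# not a claimed sufficient set of observables.
import torch

def log_observables(model, loss_fn, train_loader, n):
    x, y = next(iter(train_loader))
    with torch.no_grad():
        train_loss = loss_fn(model(x), y).item()
        weight_norm = torch.cat([p.flatten() for p in model.parameters()]).norm().item()
    return {
        "train_loss": train_loss,
        "weight_norm": weight_norm,
        "llc_hat": estimate_llc(model, loss_fn, train_loader, n),  # from the sketch above
    }

# One would call this every k training steps and plot each observable against
# training time, looking for coordinated jumps or plateaus across variables.
```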