From a conversation on Discord:

Do you have in mind a way to weigh sequential learning into the actual prior?
Dmitry:
good question! We haven’t thought about an explicit complexity measure that would give this prior, but a very loose approximation that we’ve been keeping in the back of our minds could be a Turing machine/Boolean circuit version of the “BIMT” weight penalty from this paper https://arxiv.org/abs/2305.08746 (which they show encourages modularity at least in toy models)
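For concreteness: the BIMT penalty in that paper assigns each neuron a position in 2D space and adds a distance-weighted L1 term to the loss, so long connections cost more than short ones. A minimal sketch of that term for one layer, with the function name, coordinate layout, and penalty strength all being illustrative assumptions rather than the paper's exact setup:

```python
import numpy as np

def bimt_penalty(weights, coords_in, coords_out, lam=1e-3):
    """Distance-weighted L1 penalty on one layer's weight matrix.

    weights:    (n_out, n_in) weight matrix
    coords_in:  (n_in, 2) assumed 2D positions of the input neurons
    coords_out: (n_out, 2) assumed 2D positions of the output neurons
    lam:        penalty strength (illustrative value)
    """
    # Pairwise Euclidean distance between every output/input neuron pair.
    dists = np.linalg.norm(
        coords_out[:, None, :] - coords_in[None, :, :], axis=-1
    )
    # Long connections pay more: sum_ij |w_ij| * d(i, j).
    return lam * np.sum(np.abs(weights) * dists)
```

Summed over layers and added to the task loss (the paper also periodically swaps neuron positions to reduce the penalty), this is what nudges training toward local, modular circuitry in their toy models; the "Turing machine/Boolean circuit version" above would presumably swap the geometric distance for some locality measure on tape cells or gates.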
Response:
Hmm, BIMT seems to only be about intra-layer locality. It would certainly encourage learning an ensemble of features, but I’m not sure if it would capture the interesting bit, which I think is the fact that features are built up sequentially from earlier to later layers and changes are only accepted if they improve local loss.
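As a toy picture of that acceptance rule (purely illustrative: the name and the random-perturbation proposal are assumptions, and real SGD takes gradient steps rather than random ones):

```python
import numpy as np

def greedy_local_search(loss_fn, params, steps=1000, scale=0.01, seed=0):
    """Keep a proposed change only if it strictly lowers the loss."""
    rng = np.random.default_rng(seed)
    params = params.copy()
    best = loss_fn(params)
    for _ in range(steps):
        # Propose a small local perturbation of the current parameters.
        proposal = params + scale * rng.standard_normal(params.shape)
        candidate = loss_fn(proposal)
        if candidate < best:  # reject anything that doesn't improve loss
            params, best = proposal, candidate
    return params, best
```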
I’m thinking about something like the existence of a relatively smooth scaling law (?) as the criterion.
So, just some smoothness constraint that would basically integrate over paths SGD could take.
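One hypothetical way to operationalize "a relatively smooth scaling law exists" as the criterion: fit a power law to the loss curve in log-log space and score a run by its residual roughness, so that smoothly decaying runs are preferred. A sketch, with the function name and the RMS-residual choice both being assumptions:

```python
import numpy as np

def scaling_law_roughness(compute, loss):
    """RMS deviation of a loss curve from a fitted power law.

    compute, loss: 1D arrays of positive values.
    Fits loss ~ a * compute**(-b) in log-log space; a small return
    value means the curve is close to a clean power law.
    """
    x, y = np.log(compute), np.log(loss)
    slope, intercept = np.polyfit(x, y, 1)  # linear fit in log-log space
    residuals = y - (slope * x + intercept)
    return float(np.sqrt(np.mean(residuals ** 2)))
```

Usage would then be something like `scaling_law_roughness(steps, losses) < tol` as the accept/reject test on a training trajectory.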