One would really want to know if the complexity measure can predict ‘emergence’ of capabilities like inner-monologue, particularly if you can spot previously-unknown capabilities emerging which may not be covered in any of your existing benchmarks.
That’s our hope as well. Early ongoing work on toy transformers trained to perform linear regression seems to bear out that lambdahat can reveal transitions where the loss can’t.
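For readers who want the flavor of how lambdahat is estimated in practice: a common recipe is to sample a localized, tempered posterior around the trained parameters with SGLD and compare the average sampled loss to the loss at the optimum. The sketch below is my own minimal illustration on two toy losses, not the authors' actual pipeline; the hyperparameters, the localizing term, and the 1/log(n) inverse temperature are all assumptions on my part.

```python
import numpy as np

def sgld_lambdahat(loss, grad, w_star, n, beta=None, gamma=100.0,
                   eps=1e-4, n_steps=5000, n_burnin=1000, seed=0):
    """Hedged sketch of a lambdahat estimator:
        lambdahat = n * beta * (E[L(w)] - L(w*)),
    where the expectation is over samples from a tempered posterior
    localized around w* (the gamma * (w - w_star) pull), drawn via SGLD.
    All hyperparameter choices here are illustrative, not canonical."""
    rng = np.random.default_rng(seed)
    if beta is None:
        beta = 1.0 / np.log(n)  # one common inverse-temperature choice
    w = w_star.copy()
    losses = []
    for t in range(n_steps):
        noise = rng.normal(size=w.shape) * np.sqrt(eps)
        # Langevin step on beta*n*L(w) plus a quadratic localizer at w*
        w = w - (eps / 2) * (beta * n * grad(w)
                             + gamma * (w - w_star)) + noise
        if t >= n_burnin:
            losses.append(loss(w))
    return n * beta * (np.mean(losses) - loss(w_star))

n = 1000
# Singular toy loss L(w) = (w1*w2)^2: degenerate minimum at the origin.
singular = sgld_lambdahat(
    lambda w: (w[0] * w[1]) ** 2,
    lambda w: np.array([2 * w[0] * w[1] ** 2, 2 * w[0] ** 2 * w[1]]),
    np.zeros(2), n)
# Regular toy loss L(w) = |w|^2: non-degenerate quadratic minimum.
regular = sgld_lambdahat(lambda w: w @ w, lambda w: 2 * w,
                         np.zeros(2), n)
print(singular, regular)  # the singular estimate comes out much smaller
```

The point of the toy comparison: both losses have the same minimum value (zero), so the loss alone cannot distinguish them, but the lambdahat estimate is sensitive to the local degeneracy of the minimum, which is what lets it flag transitions the loss curve hides.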
But this type of ‘emergence’ tends to happen with such expensive models that the available checkpoints are too widely separated to be informative (if you get an emergence going from 1b to 10b to 100b, what does it mean to compute a complexity measure there? You’d really want to compare models wherever the emergence actually happens, like 73.5b vs 74b, or whatever.)
The kind of emergence we’re currently most interested in is emergence over training time, which makes studying these transitions much more tractable (the main cost you’re paying is storage for checkpoints, and storage is cheap). It’s still a hurdle in that we have to start training large models ourselves (or setting up collaborations with other labs).
But the induction bump happens at pretty small (i.e. cheap) model sizes, so it could be replicated many times and in many ways, both within a training run and across training runs, and one could see how the complexity metric reflects or predicts the induction bump. Is that one of the ‘hidden’ transitions you plan to test? And if not, why not?
The induction bump is one of the main things we’re looking into now.