When you have enough real-world data, you don’t need or want to store it because of diminishing returns on retraining compared to grabbing a fresh datapoint from the firehose. (It’s worth noting that no one in the large language model space has ever ‘used up’ all the text available to them in datasets like The Pile, or even done more than 1 epoch over the full dataset they used.) This is also good for users if they don’t have to keep around the original dataset to sample maintenance batches from while doing more training.
This would be the main crux, actually a tremendously important one. I take this to mean that models would largely be very far from the overparameterized regime relative to the data? I had expected operating in an overparameterized regime to give a lot more capabilities, and until now considered overfitting the dataset almost a necessity, whereas you seem to indicate this is an unreasonable assumption to make?
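(To be concrete about what I mean by "overparameterized": more parameters than datapoints, so the model can interpolate the training set exactly. A minimal toy sketch in numpy, purely illustrative and obviously nothing like a large model:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparameterized toy setting: 10 datapoints, 50 parameters.
X = rng.normal(size=(10, 50))
y = rng.normal(size=10)  # arbitrary labels, no structure at all

# For a wide (underdetermined) system, lstsq returns the
# minimum-norm solution, which fits the training data exactly.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

train_err = float(np.max(np.abs(X @ w - y)))
print(train_err)  # essentially zero: the model memorizes everything
```

Even with pure-noise labels, a linear model with 50 parameters drives training error to (numerically) zero on 10 points, i.e., it can memorize everything; the question is whether large models relative to their datasets ever get near that regime.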
If so, erm, not only catastrophic forgetting, but a lot of the intuitions I've seen people on the AI Alignment Forum base their arguments on could potentially be thrown in the bin. E.g., I'm more confident in catastrophic forgetting having its effect when the network is overfitted on past data. If a network cannot even properly learn the past data, but only the frequently occurring patterns in it, those patterns might occur too repetitively to be forgotten. But then, deep networks could do a lot better performance-wise by overfitting the dataset and exhaustively memorizing the less-frequent patterns as well.
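(For concreteness, the forgetting effect I have in mind can be shown even in a toy linear model; a hypothetical minimal sketch in numpy, with made-up tasks and hyperparameters, not a claim about deep networks at scale:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "tasks": linear regressions with different ground-truth weights.
X_a = rng.normal(size=(200, 10))
w_a = rng.normal(size=10)
y_a = X_a @ w_a

X_b = rng.normal(size=(200, 10))
w_b = rng.normal(size=10)
y_b = X_b @ w_b

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def sgd(w, X, y, lr=0.01, epochs=50):
    # Plain per-sample SGD on squared error.
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            grad = 2 * (X[i] @ w - y[i]) * X[i]
            w = w - lr * grad
    return w

w = np.zeros(10)
w = sgd(w, X_a, y_a)
loss_a_before = mse(w, X_a, y_a)  # near zero after fitting task A
w = sgd(w, X_b, y_b)              # continue training on task B only
loss_a_after = mse(w, X_a, y_a)   # loss on task A rises: forgetting
print(loss_a_before, loss_a_after)
```

Training on task B alone pulls the weights toward B's solution and the fit to task A degrades sharply; the crux above is whether this picture transfers to underparameterized models that only ever learned the frequent patterns in the first place.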
…it gets easier for them to store arbitrary capabilities without interference, in part because the better representations they learn mean that there is much less to store/learn for any new task, which will share a lot of structure.
Here, the problem of catastrophic forgetting would arise not in downstream learning tasks, but in updating this learnt representation for newer tasks.
The grokking paper is definitely preliminary. No one expected that and I’m not aware of any predictions of that (or patient teacher*) even if we can guess about a wide-basin/saddle-point interpretation. I don’t have a list of spin-glass papers because I distrust such math/theory papers and haven’t found them to be helpful.
Very fair, cool. Thanks, those five were nice illustrations, although I'll need some time to digest the nature of non-linear dynamics. I've bookmarked them for an interesting trip someday.
I’m not sure how useful transparency tools would be. They can’t tell you anything about adversarial examples. Do they even diagnose neural backdoors yet? If they can’t find actual extremely sharp decision boundaries around specific inputs, hard to see how they could help you understand what an arbitrary SGD update does to decision boundaries across all inputs.
In this case, I deferred on it, as I don't understand what's really going on in transparency work.
But more generally speaking: ditto, I sort of believe this to a large degree. I was trying to highlight this point in the section 'Application: Transparency'. I notice I'm significantly more pessimistic than the median person on the AI Alignment Forum, so there are some cruxes which I cannot put my finger on. Could you elaborate a bit more on your thoughts?