They fit a simplistic model where the two variables are independent and the contribution of each decays as a power law. This leads to the shocking conclusion that the two inputs are independent and decay as power laws...
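For reference, the fitted form is just a constant plus two separate power-law terms, one in parameter count and one in token count. A rough sketch, using the (approximate) coefficients reported in the paper, which are fitted constants rather than anything fundamental:

```python
# The Chinchilla parametric loss: an irreducible term plus two independent
# power-law terms, one in parameters (N) and one in training tokens (D).
# Coefficients are the approximate values reported in the paper.
def chinchilla_loss(n_params, n_tokens,
                    E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """L(N, D) = E + A / N**alpha + B / D**beta"""
    return E + A / n_params**alpha + B / n_tokens**beta

print(chinchilla_loss(70e9, 1.4e12))  # e.g. a 70B model on 1.4T tokens
```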
I mean, the model is probably fine for its intended purpose: finding the rough optimal ratio of parameters to data for a given compute budget. It might mean that current models allocate their compute budgets suboptimally. But it doesn’t imply anything beyond that, like some hard limit to scaling given our data supply.
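Concretely, that intended use looks something like the sketch below (a toy illustration assuming the usual C ≈ 6·N·D FLOPs approximation and the paper’s fitted coefficients, not their exact procedure):

```python
# Toy version of the "intended purpose": for a fixed compute budget, sweep
# candidate model sizes and pick the (params, tokens) split that minimizes
# the fitted loss, using the usual C ~= 6 * N * D FLOPs approximation.
import numpy as np

E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28  # Chinchilla's reported fit (approx.)

def optimal_split(compute_flops):
    n_params = np.logspace(8, 12, 4000)          # candidate model sizes, 100M to 1T params
    n_tokens = compute_flops / (6.0 * n_params)  # tokens the budget allows at each size
    loss = E + A / n_params**alpha + B / n_tokens**beta
    i = int(np.argmin(loss))
    return n_params[i], n_tokens[i]

n, d = optimal_split(1e24)  # an arbitrary example budget
print(f"~{n/1e9:.0f}B params, ~{d/1e12:.1f}T tokens ({d/n:.0f} tokens/param)")
```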
If the big tech companies really want to train a giant model but run out of data (unlikely)… well, it may not be compute-optimal, but there is nothing stopping them from doing multiple passes over the same data. And if they ever get to the point where it starts to overfit (also unlikely), there’s a plethora of regularization methods to try.
What specific claims in the post do you disagree with?
See this post for why multiple epochs will probably not work nearly as well as training on additional data.
I’m not sure what my exact thoughts were back then. I was/am at least skeptical of the specific formula used, as it seems arbitrary. It is intentionally designed to have certain properties, like diminishing returns, baked in. So it’s not exactly a “wild implication” that it has these properties.
I recently fit the Chinchilla formula to the data from the first LLaMA paper: https://i.imgur.com/u1Tm5EU.png
This was over an unrelated disagreement elsewhere about whether Chinchilla’s predictions still held or made sense, as well as the plausibility of training tiny models to far greater performance.
First, the new parameters are wildly different from the old ones. Take that for what you will, but they are hardly set in stone. Second, even with the best fit, the formula still doesn’t really match the shape of the observed curves. I think it’s just not the right curve.
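If anyone wants to reproduce that kind of fit, the mechanics are simple. Here’s a minimal sketch with scipy; the “observations” below are synthetic placeholders, purely to show the mechanics, so swap in points actually read off the LLaMA training curves:

```python
# Minimal sketch of fitting the Chinchilla form L(N, D) = E + A/N^a + B/D^b
# to observed (params, tokens, loss) points. The observations here are
# synthetic (generated from the published coefficients plus noise); replace
# them with real points from the training curves you care about.
import numpy as np
from scipy.optimize import curve_fit

def chinchilla(x, E, A, B, a, b):
    N, D = x
    return E + A / N**a + B / D**b

rng = np.random.default_rng(0)
N_obs = np.repeat([7e9, 13e9, 33e9, 65e9], 4)           # model sizes
D_obs = np.tile([0.25e12, 0.5e12, 1.0e12, 1.4e12], 4)   # tokens seen
L_obs = chinchilla((N_obs, D_obs), 1.69, 406.4, 410.7, 0.34, 0.28)
L_obs = L_obs + rng.normal(0, 0.01, L_obs.shape)         # pretend measurement noise

p0 = [1.7, 400.0, 400.0, 0.3, 0.3]  # rough starting guess
fit, _ = curve_fit(chinchilla, (N_obs, D_obs), L_obs, p0=p0, maxfev=50000)
print(dict(zip(["E", "A", "B", "alpha", "beta"], fit.round(3))))
```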
As for reusing data, I’ve seen sources claim that reusing data in language models up to four times had no negative effect, and that up to around 40 times was possible before it really stopped helping. I think LLMs currently do not use much regularization or the other tricks that were used in other fields when data was limited. Those might push it further.
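Just to illustrate the shape of that claim (a made-up toy curve, not a fitted law from any paper): the first few passes count at roughly full value, and additional passes contribute less and less until they stop mattering.

```python
# Purely a toy illustration of the rough claim above (NOT a fitted law):
# repeated epochs count fully for the first few passes, then contribute
# exponentially less, flattening out after a few dozen passes.
import math

def effective_epochs(epochs, free=4.0, decay=15.0):
    """Approximate 'fresh data equivalent' number of epochs (toy numbers)."""
    if epochs <= free:
        return epochs
    # beyond the "free" passes, each additional pass is worth less and less
    return free + decay * (1.0 - math.exp(-(epochs - free) / decay))

for e in (1, 4, 10, 40, 100):
    print(e, round(effective_epochs(e), 1))
```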
If data became truly scarce, there may be other tricks to extend the data we have further. You also have all of the data from the people that talk to these things all day and upvote and downvote their responses. (I don’t think anyone has even tried making an AI that intentionally asks users questions about things it wants to learn more about, like a human would do.)