Researchers have had (and even published!) tons of ideas that looked promising at smaller tasks and smaller budgets but then failed to provide gains, or hurt more than they helped, at larger scales and when combined with existing techniques. That’s why frontier AI developers “prove out” new methods in settings that are close to the one they actually care about. [1]
Here’s an excerpt from Dwarkesh’s interview with Sholto and Trenton, where they allude to this:
Sholto Douglas 00:40:32
So concretely, what does a day look like? I think the most important part to illustrate is this cycle of coming up with an idea, proving it out at different points in scale, and interpreting and understanding what goes wrong. I think most people would be surprised to learn just how much goes into interpreting and understanding what goes wrong.
People have long lists of ideas that they want to try. Not every idea that you think should work will work. Trying to understand why that is, and working out what exactly you need to do to interrogate it, is quite difficult. So a lot of it is introspection about what’s going on. It’s not pumping out thousands and thousands of lines of code. It’s not the difficulty in coming up with ideas. Many people have a long list of ideas that they want to try, but paring that down and shot-calling, under very imperfect information, which are the right ideas to explore further is really hard.
Dwarkesh Patel 00:41:32
What do you mean by imperfect information? Are these early experiments? What is the information?
Sholto Douglas 00:41:40
Demis mentioned this in his podcast. It’s like the GPT-4 paper where you have scaling law increments. You can see in the GPT-4 paper, they have a bunch of dots, right?
They say we can estimate the performance of our final model using all of these dots and there’s a nice curve that flows through them. And Demis mentioned that we do this process of scaling up.
Concretely, why is that imperfect information? It’s because you never actually know if the trend will hold. For certain architectures the trend has held really well. And for certain changes, it’s held really well. But that isn’t always the case. And things which can help at smaller scales can actually hurt at larger scales. You have to make guesses based on what the trend lines look like and on your intuitive feeling of what’s actually going to matter, particularly for those changes which help at the small scale.
Dwarkesh Patel 00:42:35
That’s interesting to consider. For every chart you see in a release paper or technical report that shows that smooth curve, there’s a graveyard of first few runs and then it’s flat.
Sholto Douglas 00:42:45
Yeah. There’s all these other lines that go in different directions. You just tail off.
[…]
Sholto Douglas 00:51:13
So one of the strategic decisions that every pre-training team has to make is exactly how much compute to allocate to different training runs: to your research program versus scaling up the last best thing that you landed on. They’re all trying to arrive at an optimal point here. One of the reasons why you still need to keep training big models is that you get information there that you don’t get otherwise. Scale has all these emergent properties which you want to understand better.
Remember what I said before about not being sure what’s going to fall off the curve. If you keep doing research in this regime and keep getting more and more compute-efficient, you may have actually gone off the path that eventually scales. So you need to constantly be investing in big runs too, at the frontier of what you expect to work.
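To make the “dots and a curve” picture above concrete, here is a toy sketch of the extrapolation step: fit a saturating power law to a handful of small prove-out runs and read off the predicted loss at the target compute. This is my own illustration, not anything from the interview or the GPT-4 report; the run data, the functional form, and all the numbers are made up.

```python
# Toy illustration: fit loss(C) = a * (C / C0)^(-b) + c to small "prove-out"
# runs and extrapolate to the compute of the planned big run.
import numpy as np
from scipy.optimize import curve_fit

C0 = 1e19  # reference compute scale, just to keep the fit well-conditioned

def scaling_law(compute, a, b, c):
    """Saturating power law for eval loss as a function of training compute."""
    return a * (compute / C0) ** (-b) + c

# Made-up results from small prove-out runs: (training FLOP, eval loss).
compute = np.array([1e19, 1e20, 1e21, 1e22, 1e23])
loss = np.array([3.11, 2.63, 2.31, 2.06, 1.90])

params, _ = curve_fit(scaling_law, compute, loss, p0=[1.0, 0.1, 1.0])

# Extrapolate two orders of magnitude past the largest prove-out run.
target = 1e25
print(f"predicted loss at {target:.0e} FLOP: {scaling_law(target, *params):.2f}")
```

The catch Sholto is describing is exactly that this prediction is only as good as the assumption that the fitted trend keeps holding two orders of magnitude past your data; a change that lowers every point on the left of the curve can still bend it the wrong way at scales you never tested.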
[1] Unfortunately, not being a frontier AI company employee, I lack first-hand evidence and concrete numbers for this. But my guess would be that new algorithms used in training are typically proved out within 2 OOM of the final compute scale, i.e., at no less than roughly a hundredth of the compute of the final run.
Yes, I think that what it takes to advance the AI capability frontier has changed significantly over time, and I expect this to continue. That said, I don’t think that existing algorithmic progress is irrelevant to powerful AI. The gains accumulate, even though we need increasing resources to keep them coming.
AFAICT, it is not unusual for productivity models to account for effects like this. Jones (1995) includes one in his semi-endogenous growth model: as useful innovations accumulate, the rate at which each unit of R&D effort produces further innovations diminishes. That paper notes that this was already known in the literature as a “fishing out” effect.
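As I understand it, the idea-production function in that model is usually written (my notation may differ slightly from the paper’s) as

$$\dot{A} = \delta \, L_A^{\lambda} A^{\phi}, \qquad \phi < 1,$$

where $A$ is the stock of ideas, $L_A$ is the research effort devoted to producing them, and $\phi < 1$ is what captures “fishing out”: the larger the existing stock of ideas, the less each additional unit of effort adds.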