That’s not exactly my claim. If he said more to the reporters than his words quoted in the article[1], then it might’ve been justified to interpret him as saying that pretraining is plateauing. The article isn’t clear on whether he said more. If he said nothing more, then the interpretation about plateauing doesn’t follow, but could in principle still be correct.
Another point is that Sutskever left OpenAI before they trained the first 100K H100s model, and in any case one datapoint of a single training run isn’t much evidence. The experiment that could convincingly demonstrate plateauing hasn’t been performed yet. Give it at least a few months, for multiple labs to try and fail.
“The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Everyone is looking for the next thing,” Sutskever said. “Scaling the right thing matters more now than ever.”
I definitely agree that people are overupdating from this training run, and we will need to wait.
(I also made the mistake of overupdating.)