I think the big implication for now is that the scaling hypothesis for LLMs, at least in its bounded form where some fixed amount of scaling is supposed to suffice, is probably false across far more scaling effort than we realized, and this extends AI timelines by quite a bit.