I think this misunderstands what the discussion of “barriers to continued scaling” is about. The question is whether we’ll continue to see ROI comparable to recent years by continuing to do the same things. If not, well… there is always, at all times, the possibility that we will figure out some new and different thing to do which will keep capabilities improving. Many people have many hypotheses about what those new and different things could be: your guess about interaction is one, inference-time compute is another, synthetic data is a third, deeply integrated multimodality is a fourth, and the list goes on. But these are all hypotheses which may or may not pan out, not already-proven strategies, which makes them a very different topic of discussion than the “barriers to continued scaling” of the things people have already been doing.
This seems right to me, but the discussion of “scaling will plateau” usually feels like it comes bundled with “and the default expectation is that this means LLM-centric AI will plateau”, which seems to me like the wrong belief to have.