Personally I believe that a novel algorithm/​architecture which is substantially better than transformer-based LLMs is findable, and would show up even at small scale.
I think the effect you are discussing is more of an issue for incremental improvements on the existing paradigm.
My point is that people may take the difficulty of finding incremental improvements as strong evidence that LLMs are generally limited.