No links, because no one in all of existence currently understands what the heck is going on inside of LLMs—which, of course, is just another way of saying that it’s pretty unreasonable to assign a high probability to your personal guesses about what the thing LLMs do—whether you call it “predicting the most probable next word” or “reasoning about the world”—will or will not scale to.
Which, itself, is just a rephrase of the classic rationalist question: what do you think you know, and why do you think you know it?
(For what it’s worth, by the way, I actually share your intuition that current LLM architectures lack some crucial features that are necessary for “true” general intelligence. But this intuition isn’t very strongly held, considering how many times LLM progress has managed to surprise me already.)