But isn’t the outside view, that LLMs are hitting a wall and are “stochastic parrots,” true? GPT-4o has been weaker and cheaper than GPT-4T in my experience, and the same is true of GPT-4T vs. GPT-4. The two versions of GPT-4 seem about the same. Opus is a bit stronger than GPT-4, but not by much and not in every topic. Both Opus and GPT-4 exhibit patterns of being a stochastic autocompleter, not a logician. (Humans aren’t that much better, of course. People are terrible at even trivial math. Logic and creativity are difficult.) DALL-E and its kin don’t really have an artistic sense, and still need prompt engineering to produce beautiful art. Gemini 1.5 Pro is even weaker than GPT-4, and I’ve heard Gemini Ultra has been retired from public access. All of these models get worse as their context grows, and their grasp of long-range dependencies is terrible.
The pace is of course still not too bad compared with other technologies, but there doesn’t seem to be any long-context “Q*” GPT-5 in store from any company.
PS: Does lmsys do anything to control for the speed effect? GPT-4o is very fast, and that alone could account for many Elo points.
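To put rough numbers on that worry: under a standard Elo update, even a small speed-driven preference edge compounds into a visible rating gap. A minimal simulation, with made-up parameters (K-factor, win rate) rather than lmsys’s actual methodology:

```python
import random

# Two equally "smart" models; raters prefer the faster one 55% of the
# time purely on latency. Standard Elo update with an assumed K of 32.
def elo_update(r_a, r_b, score_a, k=32.0):
    # score_a: 1.0 if model A wins the battle, 0.0 if it loses
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

random.seed(0)
r_fast, r_slow = 1200.0, 1200.0
for _ in range(2000):
    win = 1.0 if random.random() < 0.55 else 0.0
    r_fast, r_slow = elo_update(r_fast, r_slow, win)

print(round(r_fast - r_slow))  # hovers around a few dozen points
```

With a steady 55% preference, the gap settles near 400·log10(55/45) ≈ 35 points, so speed alone really could be worth “many Elos.”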
GPT-4o is literally cheaper.
And you’re probably misjudging it from its text-only outputs. If you watched the demos, there was considerable additional signal in the vocalizations. It looks like there may be very deep integration of SSML.
One of the ways you could bypass word-problem variation errors in older text-only models was replacing tokens with symbolic representations. In general, we’re probably at the point of complexity where breaking away from token-level similarity to the training data, and instead having prompts match context at the level of concepts (like in this paper), is going to lead to significantly improved expressed performance.
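Here’s a minimal sketch of the kind of token replacement I mean, assuming a simple numbers-to-placeholders scheme of my own invention:

```python
import re

def symbolize(problem: str):
    """Replace concrete numbers with symbolic placeholders (N1, N2, ...),
    returning the abstracted prompt and a mapping back to the values."""
    mapping = {}
    def repl(match):
        sym = f"N{len(mapping) + 1}"
        mapping[sym] = match.group(0)
        return sym
    abstracted = re.sub(r"\d+(?:\.\d+)?", repl, problem)
    return abstracted, mapping

prompt, values = symbolize("Ann has 3 apples and buys 12 more. How many now?")
print(prompt)  # Ann has N1 apples and buys N2 more. How many now?
print(values)  # {'N1': '3', 'N2': '12'}
```

The model then reasons over N1 and N2 instead of pattern-matching on the specific numbers, and the caller substitutes the values back at the end.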
I would strongly suggest not evaluating GPT-4o’s overall performance in text-only mode without the SSML markup added.
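To show what I mean by the markup, here is the same answer with and without it. The tags below are standard SSML; whether GPT-4o’s audio path actually emits or consumes markup like this is my speculation.

```python
# Plain text vs. an SSML-annotated version of the same answer.
plain = "No. Primes greater than 2 are odd."

with_ssml = (
    "<speak>"
    '<emphasis level="strong">No.</emphasis>'
    '<break time="300ms"/>'
    "Primes greater than "
    '<say-as interpret-as="cardinal">2</say-as>'
    ' are <prosody rate="slow">odd</prosody>.'
    "</speak>"
)
print(plain)
print(with_ssml)
```

The emphasis, pause, and prosody tags carry exactly the kind of extra signal that a text-only evaluation throws away.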
Opus is great, I like that model a lot. But in general I think most of the people looking at this right now are too focused on what’s happening with the networks themselves and not focused enough on what’s happening with the data, particularly around the clustering of features across multiple dimensions of the vector space. SAEs (sparse autoencoders) are clearly picking up only a small sample of those features, and even then aren’t cleanly discovering precisely what’s represented.
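For readers unfamiliar with the term, a toy SAE of the interpretability flavor looks like the following; the dimensions and initialization are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_feat = 64, 512                # dictionary is 8x overcomplete
W_enc = rng.normal(0.0, 0.05, (d_model, d_feat))
W_dec = rng.normal(0.0, 0.05, (d_feat, d_model))
b_enc = np.full(d_feat, -0.1)            # negative bias encourages sparsity

x = rng.normal(size=d_model)             # a residual-stream activation
f = np.maximum(x @ W_enc + b_enc, 0.0)   # ReLU encoder -> sparse features
x_hat = f @ W_dec                        # linear decoder -> reconstruction
print(f"{(f > 0).mean():.1%} of features active")
print(f"reconstruction error: {np.linalg.norm(x - x_hat):.2f}")
```

The dictionary can only ever be as large as you train it, which is part of why I say it samples the feature space rather than enumerating it.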
I’d wait to see what ends up happening with things like CoT in SSML synthetic data.
The current Gemini search-summarization failures, as well as an unexpected result the other week with humans around a theory-of-mind variation, suggest to me that the more models lean on what are effectively surface statistics over token similarity, rather than completion based on feature clustering, the more their performance is held back, and that cutting through that similarity with formatting differences will lead to a performance leap. This may even be part of why models will frequently get a problem right as a code expression but not as a direct answer.
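As a concrete, purely illustrative example of the two framings (the prompts are mine, not from any benchmark, and the model reply is a placeholder):

```python
# The same arithmetic question, posed directly and as a code request.
direct_prompt = "What is 17% of 342? Answer with just the number."
code_prompt = (
    "Write a single Python expression that computes 17% of 342. "
    "Reply with only the expression."
)

model_reply = "0.17 * 342"  # stand-in for an actual model response
print(round(eval(model_reply), 2))  # 58.14, evaluated deterministically
```

Anecdotally, the second framing succeeds more often, since the expression only has to be right symbolically; the arithmetic itself is offloaded to the interpreter.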
So even if GPT-5 doesn’t arrive, I’d happily bet that we see a very noticeable improvement over the next six months, and that’s not even accounting for additional efficiency in prompt techniques. All that said, I’d also be surprised if we don’t at least see GPT-5 announced by that point.
P.S. Lmsys is arguably the best leaderboard for evaluating real-world usage, but it still inherently reflects a sampling bias around what the people who visit lmsys ask of models, as well as the ways in which they do so. I wouldn’t extrapolate relative performance too far, particularly when differences are minor.