Is it possible Meta just trained on bad data while Google and DeepSeek trained on good? See my two comments here: https://www.lesswrong.com/posts/Wnv739iQjkBrLbZnr/meta-releases-llama-4-herd-of-models?commentId=KkvDqZAuTwR7PCybB
I’m afraid you might have missed the core thesis of my comment, so let me reword it. I’m arguing that one should not extrapolate findings from that paper to what Meta is training now.
The Llama 4 model card says the herd was trained on “[a] mix of publicly available, licensed data and information from Meta’s products and services. This includes publicly shared posts from Instagram and Facebook and people’s interactions with Meta AI”: https://github.com/meta-llama/llama-models/blob/main/models/llama4/MODEL_CARD.md To use a term from information theory, these posts probably have much lower factual density than the curated web text in C4. There is no public information on how fast the loss goes down even on the first epoch of this kind of data, let alone on several.
I generated a slightly more structured write-up of my argument and edited it manually; hope it will be useful.
Let’s break down the extrapolation challenge:
Scale Difference:
Muennighoff et al.: Studied unique data budgets up to 178 billion tokens and total processed tokens up to 900 billion. Their models were up to 9 billion parameters.
Llama 4 Behemoth: Reportedly trained on >30 trillion tokens (>30,000 billion). The model has 2 trillion total parameters (~288B active).
The Gap: We’re talking about extrapolating findings from a regime with ~170x fewer unique tokens (comparing 178B to 30T) and models ~30x smaller (active params). While scaling laws can be powerful, extrapolating across 2 orders of magnitude in data scale carries inherent risk. New phenomena or different decay rates for repeated data could emerge.
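A quick back-of-the-envelope check of those ratios (just the arithmetic behind the figures quoted above, using the reported numbers rather than measurements of mine):

```python
# Ratio check for the extrapolation gap (reported figures only)
unique_tokens_paper = 178e9       # Muennighoff et al.: largest unique-data budget
unique_tokens_behemoth = 30e12    # Llama 4 Behemoth: reportedly >30T training tokens
params_paper = 9e9                # largest model in the paper
params_behemoth_active = 288e9    # Behemoth active parameters (~288B of 2T total)

print(f"unique-token gap:     ~{unique_tokens_behemoth / unique_tokens_paper:.0f}x")  # ~169x, >2 OOM
print(f"active-parameter gap: ~{params_behemoth_active / params_paper:.0f}x")         # ~32x
```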
Data Composition and Quality:
Muennighoff et al.: Used C4 (filtered web crawl) and OSCAR (less filtered web crawl), plus Python code. They found filtering was more beneficial for the noisier OSCAR.
Llama 4 Behemoth: The >30T tokens includes a vast amount of web data, code, books, etc., but is also likely to contain a massive proportion of public Facebook and Instagram data.
The Issue: Social media data has different characteristics: shorter texts, different conversational styles, potentially more repetition/near-duplicates, different types of noise, and potentially lower factual density compared to curated web text or books. How the “value decay” of repeating this specific type of data behaves at the 30T scale is not something the 2023 paper could have directly measured.
Model Architecture:
Muennighoff et al.: Used dense Transformer models (GPT-2 architecture).
Llama 4 Behemoth: Is a Mixture-of-Experts (MoE) model.
The Issue: While MoE models are still Transformers, the way data interacts with specialized experts might differ from dense models when it comes to repetition. Does repeating data lead to faster overfitting within specific experts, or does the routing mechanism mitigate this differently? This interaction wasn’t studied in the 2023 paper.
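For intuition, here is a toy top-k router sketch of my own (not Meta’s actual architecture, just an illustration): because routing is a deterministic function of the token representation, a token repeated across epochs keeps landing on the same experts, so repetition concentrates updates on a subset of parameters unless noise or capacity limits intervene.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2
W_gate = rng.normal(size=(d_model, n_experts))   # router (gating) weights

def route(token_repr: np.ndarray) -> np.ndarray:
    """Return the indices of the top-k experts this token is dispatched to."""
    logits = token_repr @ W_gate
    return np.sort(np.argsort(logits)[-top_k:])

token = rng.normal(size=d_model)
# The same (repeated) token is dispatched to the same experts on every epoch,
# ignoring stochastic tricks like jitter noise or token dropping:
print(route(token), route(token))
```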
Conclusion: directly applying the quantitative findings (e.g., “up to 4 epochs is fine”, RD* ≈ 15) to the Llama 4 Behemoth scale and potential data mix is highly speculative.
The massive scale difference is a big concern.
The potentially different nature and quality of the data (social media) could significantly alter the decay rate of repeated tokens.
MoE architecture adds another layer of uncertainty.
The “Data Wall” Concern: even if Meta could have repeated data based on the 2023 paper’s principles, they either chose not to (perhaps due to internal experiments showing it wasn’t effective at their scale/data mix) or they are hitting a wall where even 30T unique tokens isn’t enough for the performance leap expected from a 2T parameter compute-optimal model, and repeating isn’t closing the gap effectively enough.
P. S.
Also, check out https://www.reddit.com/r/LocalLLaMA, they are very disappointed with how bad the released models turned out to be (yeah, I know that’s not directly indicative of Behemoth performance)
Muennighoff et al. (2023) studied data-constrained scaling on C4 up to 178B tokens, while Meta presumably included all the public Facebook and Instagram posts and comments. Even ignoring the two-OOM difference and the architectural dissimilarity (e.g., some experts might overfit earlier than the research on dense models suggests; perhaps routing should take that into account), common sense strongly suggests that training twice on, say, a Wikipedia paragraph must be much more useful than training twice on posts by Instagram models, and especially on the comments under those (which are often as like as two peas in a pod).
Since physics separated from natural philosophy in the times of Newton, it has almost always[1] progressed when new experimental data uncovered deficiencies in the then-current understanding of the universe. During the Cold War an unprecedentedly large amount of money was invested in experimental physics, and by the late 20th century all the reasonably low-hanging fruit had been picked (in the meantime the experiments have become absurdly expensive and difficult). I have also written on the topic at https://www.lesswrong.com/posts/CCnycGceT4HyDKDzK/a-history-of-the-future-2025-2040?commentId=KtusJZLAFDt4PW65R and in the thread below, check it out.
As for string theory in particular, it represents just one significant school of thought, very popular in the US, but other theories share the same problem of lacking experimental data to test against.
Also, the body of knowledge in physics has become so large that local progress made here and there is no longer really visible in the grand scheme of things, even if it’s worth a Nobel Prize (whereas during the Second Industrial Revolution one discovery could, figuratively speaking, establish a new branch of science)
- ^
Two notable exceptions that, IMHO, kind of support the rule are Maxwell’s equations and general relativity
- ^
I don’t think pure mathematics makes a good parallel. There are still discoveries made by single mathematicians or very small research groups, but this hasn’t really been the case in physics since about the mid-20th century, when the US and USSR invested lots of money in modern large-scale research done by huge groups
Isn’t Polymarket already anonymous?
Not just long context in general (that can be partially mitigated with RAG or even BM25/tf-idf search), but also nearly 100% factual accuracy on it, as I argued last week
https://simple-bench.com presents an example of a similar benchmark with tricky commonsense questions (such as counting ice cubes in a frying pan on the stove), also with a pretty similar leaderboard. It is sponsored by Weights & Biases and devised by the author of a good YouTube channel, who presents quite a balanced view on the topic there and doesn’t appear to have a conflict of interest either. See https://www.reddit.com/r/LocalLLaMA/comments/1ezks7m/simple_bench_from_ai_explained_youtuber_really for independent opinions on this benchmark
Two months later I tried actually implementing a nontrivial conversion of a natural-language mathematical argument into a fully formalized Lean proof, in order to check whether I was indeed underestimating it (TBH, I had never tried a proof assistant before).
So I took a difficult integral from a recent MathSE question I couldn’t solve analytically myself, had Gemini 2.5 Pro solve it 0-shot,[1] verified it numerically, set up a Lean environment in Google Colab and then asked whether another instance of Gemini 2.5 could convert the solution into a proof. It told me that this is indeed hard:
This is not a trivial task. Here’s why:
Informal vs. Formal: My natural language explanation, while aiming for clarity, likely skipped steps, relied on implicit assumptions (like function continuity, differentiability, domain constraints), or used intuitive leaps that Lean demands be made explicit and rigorously justified using defined theorems and axioms.
Library Navigation: Finding the exact theorems in mathlib4 that correspond to each step (e.g., the correct version of integration by parts, substitution, limit theorems, properties of specific functions) requires familiarity with the library.
Side Conditions: Every theorem in Lean (like the chain rule or integration by parts) has precise side conditions (e.g., f is differentiable, g’ is continuous, the function is integrable). The natural language proof might not have explicitly stated or verified all of these, but the Lean proof must.
Calculations: Even seemingly simple algebraic manipulations or derivative calculations need to be carried out using Lean’s tactics (ring, linarith, simp, rw, etc.) or proven step-by-step.
Proof Structure & Tactics: Structuring the proof correctly in Lean and using the appropriate tactics (apply, exact, calc, by, etc.) to guide the prover is a skill in itself.
My Limitations: While I can generate code snippets, generating a complete, correct, and non-trivial formal proof interactively is currently beyond my capabilities. It often requires a human expert to guide the process, debug errors, and find the right lemmas.
<...>
It is highly unlikely that I can produce a complete, automatically verifiable Lean proof for a “tricky” integral directly from a natural language description. However, if you provide the details, I can attempt to sketch out the Lean concepts involved, which could be a starting point for someone (perhaps you in Colab, with time and learning) to build the actual proof.
Gemini and I weren’t able to set up mathlib4 in Lean 4 and I gave up on the task, but just by looking at the solution Gemini listed the following problems[2] (I put it here as a screen capture instead of a proper collapsible section because I couldn’t figure out how to copy-paste the formulas correctly):
To sum up: yes, I did underestimate the hardness of the task; it is certainly beyond the reach of current SOTA LLMs.
However, I believe that since this type of task is verifiable in silico and really very convenient for synthetic training data generation, Google folks behind AlphaGeometry are probably going to solve this problem in a year or two.
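To give a sense of the gap between “the integral is solved on paper” and “the integral is formalized”, here is a minimal sketch of what even a trivial definite integral looks like in Lean 4 with mathlib4; I’m assuming the relevant lemma is still named `integral_id` and importable as below, so treat this as a sketch rather than checked code:

```lean
import Mathlib.Analysis.SpecialFunctions.Integrals

open intervalIntegral

-- Even this toy statement already requires knowing the right mathlib4 lemma;
-- the actual integral from the MathSE question would need far more machinery
-- (substitutions, side conditions, integrability proofs, ...).
example : ∫ x in (0:ℝ)..1, x = 1 / 2 := by
  -- assuming `integral_id : ∫ x in a..b, x = (b ^ 2 - a ^ 2) / 2`
  norm_num [integral_id]
```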
- ^
The fact that an LLM solved it 0-shot is notable in its own right, BTW. Generally, I’d estimate that Gemini 2.5 and o3-mini are able to solve most of the definite integrals posted in MathSE questions. It was very different at the beginning of this year!
- ^
I haven’t checked the accuracy of all the generated details due to lack of competence and time, but I generally expect the outline to be broadly correct
Aren’t you supposed as a reviewer to first give the authors a chance to write a rebuttal and discuss it with them before making your criticism public?
One of the non-obvious but very important skills which all LLM-based SWE agents currently lack is reliably knowing which subtasks of a task you have successfully solved and which you have not. I think https://www.answer.ai/posts/2025-01-08-devin.html is a good case in point.
We have absolutely seen a lot of progress in driving down hallucinations on longer and longer contexts with model scaling; this probably made the charts above possible in the first place. However, recent research (e.g., the NoLiMa benchmark from last month, https://arxiv.org/html/2502.05167v1) demonstrates that effective context length falls far short of what is advertised. I assume it’s not just my personal experience but common knowledge among practitioners that hallucinations become worse the more text you feed to an LLM.
If I’m not mistaken, even with all the optimizations and “efficient” transformer attempts we are still stuck (since GPT-2 at least) with self-attention + a KV cache,[1] which at inference scales linearly as long as you haven’t run out of memory and quadratically afterwards. Sure, MLA has just massively ramped up the context length at which the latter happens, but it’s not unlimited: you won’t be able to cache, say, one day of work (especially since DRAM has not been scaling exponentially for years, https://semianalysis.substack.com/p/the-memory-wall).
People certainly will come up with ways to optimize long-context performance further, but it doesn’t have to continue scaling in the same way it has since 2019.
- ^
Originally known as “past cache” after the tensor name apparently coined by Thomas Wolf for the transformers library in February 2019, see commit ffd6238. The invention has not been described in the literature AFAIK, and it’s entirely possible (maybe even likely) that closed-source implementations of earlier decoder-only transformers used the same trick before this
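To make the scaling claim concrete, here is a minimal single-head decoding loop with a KV cache, a toy numpy sketch of my own rather than any particular library’s implementation: each new token appends one row of keys/values and attends over the whole cache, so per-token compute and cache memory both grow linearly with context length (while recomputing attention over the full prefix without a cache would be quadratic overall).

```python
import numpy as np

d = 64                      # head dimension
k_cache = np.zeros((0, d))  # cached keys, one row per past token
v_cache = np.zeros((0, d))  # cached values

def decode_step(q, k, v):
    """One decoding step: append this token's K/V, then attend over the whole cache."""
    global k_cache, v_cache
    k_cache = np.vstack([k_cache, k])      # cache grows by one row per token
    v_cache = np.vstack([v_cache, v])      # -> memory is O(context length)
    scores = k_cache @ q / np.sqrt(d)      # one dot product per cached token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ v_cache               # -> per-token compute is O(context length)

rng = np.random.default_rng(0)
for _ in range(8):
    q, k, v = rng.normal(size=(3, d))
    out = decode_step(q, k, v)
```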
- ^
To be honest, what I originally implied is that these founders develop their products with low-quality code, as cheap and dirty as they can, and without any long-term planning for further development
Perhaps this says more about Y Combinator nowadays than about LLM coding
Aristotle argued (and I support his view) at the beginning of Book II of the Nicomachean Ethics that virtues are just like skills: they are acquired in life by practice and by imitation of others. Perhaps it is not a coincidence that a philosophical article on the topic used “Reinforcement” in one of its subheadings. I’m also attaching a 7-minute video for those who prefer a voice explanation:
For this reason, practice ethical behavior even with LLMs and you will enjoy doing the same with people
Another example is that going from the first in-principle demonstration of chain-of-thought to o1 took two years
The correct date for the first demonstration of CoT is actually ~July 2020, soon after the GPT-3 release, see the related work review here: https://ar5iv.labs.arxiv.org/html/2102.07350
When general readers see “empirical data bottlenecks” they expect something like a couple of times better resolution or several times higher energy. But when physicists mention “wildly beyond limitations” they mean orders of magnitude more!
I looked up the actual numbers:
in this particular case we need to approach the Planck energy, which is ~1.22×10^28 eV; Wolfram Alpha readily suggests that’s ~540 kWh, 0.6 of the energy use of a standard clothes dryer or 1.3 times the energy in a typical lightning bolt; I also calculated it’s about 1.2 times the muzzle energy of the heaviest artillery piece in history, the 800-mm Schwerer Gustav;
the LHC works in the ~10^13 eV range; 14 TeV, according to WA, is about an order of magnitude above the kinetic energy of a flying mosquito;
the highest energy observed in cosmic rays is ~3×10^20 eV, or ~50 J; for comparison, air gun and paintball gun muzzle energies are around 10 J, while nail guns start from around 90 J.
So in this case we are looking at the difference between an unsafely powerful paintball marker and the most powerful artillery weapon humanity ever made (TBH I didn’t expect this last week, which is why I wrote “near-future”)
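The unit conversions behind these comparisons, for anyone who wants to re-check them (the eV figures are the standard published values; the Schwerer Gustav shell mass and muzzle velocity below are rough numbers I’m assuming, so the last ratio comes out slightly below my 1.2 above):

```python
EV_TO_J = 1.602176634e-19

planck_energy_J = 1.22e28 * EV_TO_J   # Planck energy, ~1.22e28 eV
print(planck_energy_J / 3.6e6)        # -> ~543 kWh

lhc_J = 14e12 * EV_TO_J               # LHC: 14 TeV collision energy
print(lhc_J)                          # -> ~2.2e-6 J, mosquito-scale kinetic energy

cosmic_ray_J = 3e20 * EV_TO_J         # "Oh-My-God"-class cosmic ray, ~3e20 eV
print(cosmic_ray_J)                   # -> ~48 J

# Schwerer Gustav muzzle energy: ~7-tonne shell at ~720 m/s (assumed rough figures)
gustav_J = 0.5 * 7000 * 720**2        # -> ~1.8e9 J
print(planck_energy_J / gustav_J)     # -> ~1.1, i.e. roughly comparable
```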
On the other hand, frontier math (pun intended) is much more poorly funded than biomedicine, because most PhD-level math has barely any practical applications worth spending many man-hours of high-IQ mathematicians on (which often makes them switch careers, you know). So, I would argue, if the productivity of math postdocs armed with future LLMs rises by, let’s say, an order of magnitude, they will be able to attack more laborious problems.
Not that I expect it to make much difference to the general populace or even the scientific community at large though
general relativity and quantum mechanics are unified with a new mathematical frame
The problem is not to invent a new mathematical frame, there are plenty already. The problem is that we don’t have any experimental data whatsoever to choose between them, because quantum gravity effects are expected to be relevant at energy scales wildly beyond current or near-future technological limitations. This has led to a situation where quantum gravity research has become largely detached from experimental physics, and AI can do nothing about that. Sabine Hossenfelder has made quite a few explainers (sometimes quite angry ones) about it
The third scenario doesn’t actually require any replication of CUDA: if Amazon, Apple, AMD and other companies making ASICs commoditize inference but Nvidia retains its moat in training, then with inference scaling and algorithmic efficiency improvements training will inevitably become a much smaller portion of the market
Sell to whom, competing cloud providers? That makes no sense: Lamborghini doesn’t sell its best engines to Ferrari or vice versa!
Also, all this discussion misses that inference is much easier, both hardware- and software-wise, than training, while it was expected a long time ago that at some point the market for the former would become comparable to and then larger than the market for the latter