gwern
The pondering happens in earlier layers of the network, not in the output
Then how does it produce any tokens...?
then training on task Y could inadvertently bias the model to do more or less pondering on mostly-unrelated-but-statistically-correlated topic X.
But if that is what is going on and it accidentally learns to ponder initially due to bogus feedback or error, then eventually the spurious correlation should get ironed out: the model does the pondering more, the pondering doesn’t increase reward, and so it gets unlearned.
(Also, this assumes that RL gives an average reward of 0.0, and I don’t know whether that’s true in practice.)
I think the mean would be taken out by the advantage estimation, so RLHF continues to increase the probability of generating the tokens from episodes with above-average reward, and to decrease the probability of generating the tokens from the below-average-reward episodes. This is in effect as if the average reward were always 0.
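(To make ‘the mean would be taken out’ concrete, here is a minimal sketch of a batch-mean baseline, which is one simple way of doing the advantage estimation; the names, and the use of a batch mean rather than a learned value function, are purely illustrative:)

```python
import torch

def reinforce_loss(logprobs, rewards):
    """Minimal REINFORCE-with-baseline sketch (illustrative, not any particular RLHF stack).

    logprobs: (batch,) summed log-probabilities of each sampled episode's tokens
    rewards:  (batch,) scalar reward per episode
    """
    # Subtracting the batch mean recenters rewards around 0, so episodes with
    # above-average reward get pushed up and below-average ones pushed down,
    # whatever the raw average reward happens to be.
    advantages = rewards - rewards.mean()
    return -(advantages.detach() * logprobs).mean()
```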
What would be the implications? The model could develop a political bias to think more deeply about topics related to party X, where X is whatever party has more users giving the model positive feedback. Even if the other topics on party X’s agenda are never explicitly talked about (!)
That sounds like the pondering’s conclusions are then related to the task.
This idea could very well be wrong. The gradients may be weakened during backpropagation before they get to the unrelated ideas, because the ideas did not directly contribute to the task.
Under a straightforward RLHF using PPO, I think there wouldn’t be much weakening because the REINFORCE operator conceptually simply rewards (or punishes) all tokens generated during an episode, without making much attempt to decide which were ‘good’ or ‘bad’. (That’s why it’s so high variance.) Any advantage function trying to remove some of the variance probably won’t do a good job.
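(For concreteness, the REINFORCE estimator I have in mind, with $b$ standing in for whatever baseline/advantage estimate is used, is roughly

$$\nabla_\theta J \approx \mathbb{E}\!\left[(R - b)\sum_{t=1}^{T}\nabla_\theta \log \pi_\theta(a_t \mid s_{<t})\right]$$

i.e. a single scalar $(R - b)$ multiplies the gradient of every token in the episode, ‘good’ and ‘bad’ tokens alike.)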
More problematically for your idea, if the conclusions are indeed ‘unrelated to the task’, then shouldn’t they be just as likely to arise in every episode—including the ones where it got negative reward? That would seem like it ought to exactly cancel out any learning of ‘pondering’.
You need some incentive somewhere to learn good ‘pondering’. (I have an example proposal for ‘free play’ which tries to teach a sort of ‘pondering’, but by stopping gradients, so anything learned in the initial steps is ‘free’, and so it can meta-learn to screw around and get something useful for free.)
Maybe it would look more random if you presented it segmented by token instead of translated into characters? I’m not familiar with the LLaMA tokenizations, but you seem to imply that a lot of the apparent patterns here are single tokens (like “partiellement” would be very surprising to me as the output of a greedy likelihood-minimizing sampling, but is trivial if it is a single BPE token). This would create a misleading impression of coherence.
Also, as Baginski notes, greedy sampling to minimize likelihood will not minimize total likelihood any more than greedily maximizing likelihood would maximize total likelihood. So it would be worth trying at least ‘worst-of-n’ sampling to see if it looks more like what you expect, in the same way that best-of-n often helps produce more expected LLM output. (After all, you would expect the tiniest logits to be the worst-estimated of all logits, right? Full of sheer numerical noise and error, given that this is pushing ‘dark knowledge’ to its extremes. Who can really say how much better or worse an answer, exactly, ‘衡’ is than ‘д’ when following ‘*’, etc? So if best-of-n can make such a qualitative difference when greedily sampling from the best-estimated logits…)
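(Something along these lines would be a cheap experiment; a rough sketch using HF transformers, where the model name, temperature, and n are placeholder choices, and which also prints the winner segmented by token per the point above:)

```python
# Sketch of 'worst-of-n' sampling: draw n diverse samples, then keep the one
# with the *lowest* total log-likelihood, rather than greedily taking the
# per-token minimum. Model name & hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # illustrative choice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Once upon a time"
inputs = tok(prompt, return_tensors="pt")
prompt_len = inputs.input_ids.shape[1]
outs = model.generate(**inputs, do_sample=True, temperature=2.0,
                      max_new_tokens=50, num_return_sequences=8,
                      pad_token_id=tok.eos_token_id)

def total_logprob(seq):
    """Sum of log-probabilities of the generated tokens (ignoring padding/EOS edge cases)."""
    with torch.no_grad():
        logprobs = model(seq.unsqueeze(0)).logits[0].log_softmax(-1)
    gen = seq[prompt_len:]                                # only the new tokens
    idx = torch.arange(prompt_len - 1, seq.shape[0] - 1)  # positions predicting them
    return logprobs[idx, gen].sum().item()

worst = min(outs, key=total_logprob)
# Display segmented by token rather than as flowing text, to see the real structure:
print(tok.convert_ids_to_tokens(worst.tolist()))
```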
Note that text in pretraining may even be an expensive way to go about it: one of the most dramatic demonstrations MS gave us with Sydney was the incredible speed & efficiency of web-search-powered adversarial attacks on LLMs. You don’t need to dump a lot of samples onto the Internet and pray they make it into the training data and don’t get forgotten, if you can set up a single sample with good SEO and the LLM kindly retrieves it for you and attacks itself with your sample.
This is something to think about: it’s not just making it into the training data, it’s making it into the agent’s prompt or context that can matter. People are currently talking about how Deep Research is an example of the AI trend which will drive paywalls everywhere… which may happen, but consider the positives for people who don’t put up paywalls.
Why not just ‘valuable information’, in a Value of Information sense of ‘valuable’?
The estimate of the compute of their largest version ever (which is a very helpful way to phrase it) at only <=50x GPT-4 is quite relevant to many discussions (props to Nesov) and something Altman probably shouldn’t’ve said.
The estimate of test-time compute at 1000x effective-compute is confirmation of looser talk.
The scientific research part is of uncertain importance but we may well be referring back to this statement a year from now.
Apropos of very low-latency LLMs and revisiting this topic a little: what does this imply about DRL robotics, rather than animals? Will DRL NNs have to have brains as big as humans in order to run superhuman humanoid robots?
One possible implication is that Portia-like NNs are possible for robotics in general. Robotics may be quite ‘easy’ in that sense.
It is striking that when we look at NN parameter/FLOPS-counts, we generally do not see ‘large’ robotics, vision, or sound models, but LLMs; the largest pure-vision models like PaLI-X are <100b-parameters, the largest robotics are usually <10b, with Gato 1’s ~1b having been, if anything, unusually large because of all the other stuff it was doing. (I’m very behind on the robotics literature so maybe there are now much larger 100b-parameter models as they move into the ‘foundation model’ multi-modal/task scaling paradigm, but I’d bet that there still are none >1,000b.) Even sound/image/video generative models, which would be expected to be much larger than necessary for robotics tasks, are often small enough to run on a single consumer GPU, still. And these are usually trained with scaling laws now, so these are compute-optimal sizes and it is not just that they are wildly under-parameterized (the way almost all models were pre-2020).
So, if robotics is intrinsically easy, but animal brains do not show this because of their latency requirements, which forces them into misleadingly expensive brains, the implication is that we can do robotics by lifting the limitations of biological brains, like being forced to learn in realtime, in the real world, one animal at a time, without any sharing.
We should be able to train deep but small NNs in silico: turning all animal problems into Portia problems, if you will, pausing the simulation to let the NNs think & act for as long as necessary to plan the right action, only then letting time flow to see what happens, and resetting it to try again.
We remove all burdens of wallclock time or caloric consumption or childhood development, train teacher-models which are powerful general robotic controllers, and only then use these teacher-models to optimize low-latency controllers. The wider low-latency student models will be easier to train when they simply must imitate the teacher in a supervised-learning setting instead of doing RL from scratch, and so the size should be a lot better. (If nothing else, the student models can’t ‘die’ if they make a mistake like breaking a latency constraint, so this learning setting is way easier than an animal’s task.)
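(A cartoon of that final step, behavior-cloning the slow ‘thinking’ teacher into a fast low-latency student; the architectures, sizes, and data here are all made up purely for illustration:)

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: a big teacher policy that may take as long as it
# likes per decision (the sim is paused while it plans), and a small wide
# student that must meet a hard latency budget at deployment.
teacher = nn.Sequential(nn.Linear(64, 1024), nn.ReLU(), nn.Linear(1024, 8))
student = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 8))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(10_000):
    obs = torch.randn(256, 64)          # placeholder for simulator observations
    with torch.no_grad():
        target = teacher(obs)           # teacher 'thinks' with time stopped
    loss = nn.functional.mse_loss(student(obs), target)
    opt.zero_grad(); loss.backward(); opt.step()

# Plain supervised imitation: no exploration, and the student can't 'die' by
# blowing a latency constraint, which is why this is so much easier than RL from scratch.
```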
On a related note, it is also striking how far down in size LLMs can be pushed. You can get good reasoning out of tiny billion-parameter LLMs trained hard enough on high-quality-enough data, and the ‘densifying experience curve’ is steady and rapid (halving period of ~4 months), so we can expect that at some point we may have superhuman reasoning LLMs in the billion or sub-billion parameter range… which are just very, very ignorant, perhaps even more ignorant than you or me, of all the real-world knowledge & text that a proper LLM has. We can’t train those from scratch, but we can train trillion-parameter LLMs to suck in all the text in the world, and then exhale training data for small fast cheap models.
So it seems that Moravec’s Paradox remains undefeated: as difficult as we find the abstract intellectual capabilities like the process of doing math or reasoning, so difficult that we struggle to even write them down to train LLMs on, so difficult to train on that we need giant gigawatt datacenters just to get started, they are not intrinsically difficult and, in the long run, do not require big expensive NNs.
But does that necessarily matter? Many of those models can’t use tools; and since much of the point of the end-to-end RL training of Deep Research is to teach tool use, showing DR results without tool use would be either irrelevant or misleading (eg. it might do worse than the original o3 model it is trained from, when deprived of the tools it is supposed to use).
Who right now is standing on the sidelines with a killer AI app that could rip up the market if only tokens were a bit cheaper?
OpenAI’s Deep Research is looking like something that could be big and they were standing on the sidelines in part because the tokens weren’t cheap.
Most people do not read many books or spend time in spaces where SAT vocab words would be used at all. If that were the sole determinant, you would then expect any vocab test to fail catastrophically and not predict/discriminate in most of the population (which would have downstream consequences like making SATs weirdly unreliable outside the elite colleges, or giving them much less predictive validity for low-performing demographics; the former of which I am unaware of being true and the latter of which I know is false); this would further have the surprising consequence that if a vocab test is, say, r = 0.5 with g while failing catastrophically on most of the population, it would have to be essentially perfectly correlated, r = 1, in the remainder to even be arithmetically possible. Which just punts the question: how did two book-readers come away from that book with non-overlapping vocabs...?
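(To spell out the ‘arithmetically possible’ point, a quick Monte Carlo, where the 50/50 split and the within-group correlations are assumptions chosen purely for illustration: even if the test is pure noise for fully half the population, the other half has to be correlated ~1.0 just to hit an overall r = 0.5.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
g = rng.standard_normal(n)              # latent ability
exposed = rng.random(n) < 0.5           # the half of the population that 'reads books'
noise = rng.standard_normal(n)

# Vocab score: perfectly correlated with g in the exposed half (r = 1 there),
# pure noise in the other half (r = 0 there).
vocab = np.where(exposed, g, noise)
print(np.corrcoef(vocab, g)[0, 1])      # ~0.5 overall
```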
I have good vocabulary, e.g. 800 on GRE verbal, but feel like I have a pretty bad memory for words and terms that I’ve only seen a few times.
How could you possibly know something like that?
One benefit of his ‘no-nut January’ is that by cutting out peanuts entirely, he’s also avoiding problems from oxalates. I would expect powdered peanut butter to be as dangerous in that regard.
And yet, despite the SAT being so studied-for, it remains a pretty good IQ test overall, and the SAT-V or GRE verbal sections remain OK ones. I think that’s because there are so many words (500k+ in English, and the GRE-V has no compunction about mining the obscurest just to f—with you), and you would have to study so many in order to meaningfully inflate your scores (because after all, while there may be only a hundred ‘vocab words’ on any given SAT test, you don’t know which hundred). Let’s see… Here’s an interesting-looking reference: “How Many Words Do We Know? Practical Estimates of Vocabulary Size Dependent on Word Definition, the Degree of Language Input and the Participant’s Age”, Brysbaert et al 2016:
an average 20-year-old native speaker of American English knows 42,000 lemmas and 4,200 non-transparent multiword expressions, derived from 11,100 word families. The numbers range from 27,000 lemmas for the lowest 5% to 52,000 for the highest 5%. Between the ages of 20 and 60, the average person learns 6,000 extra lemmas or about one new lemma every 2 days.
So, if you wanted to boost your score from the mean to the 95th percentile, that seems to imply that you’d have to memorize 10,000 ‘lemmas’ (“Uninflected word from which all inflected words are derived”). That’s a big number, and then you have to ask how much work that would be.
If you did this in the optimal way with spaced repetition (ignoring the time it takes to figure out the 10k you want to memorize in the first place, or the time to construct the flashcards, or any penalty from needing to inefficiently cram them for an upcoming SAT instead of life-long efficient review), which of course still few students do, as spaced repetition systems remain a niche outside of medical school & foreign-language study, the SuperMemo rough estimate is a long-term investment of 5 minutes per flashcard, and we’ll assume 1 lemma = 1 flashcard. That means you have to invest 10,000 * 5 = 50,000 minutes, or 833 hours of studying! Meanwhile, hardly anyone is doing more than 8 hours of studying for the SAT as a whole (among the kids I knew at a prep high school, many didn’t even do a weekend course, which would entail about 8 hours of classwork & study). 833 hours for vocab alone would be insane.
That’s why people generally learn vocab from passive exposure rather than targeted study. Because no one, not even the most teacher’s-pet student, wants to do that. And so vocab measures keep working.
then I think it is also very questionable whether the AI that wins wars is the most “advanced” AI. / People like Dario whose bread-and-butter is model performance invariably over-index on model performance, especially on benchmarks. But practical value comes from things besides the model; what tasks you use it for and how effective you are at deploying it.
Dario is about the last AI CEO you should be making this criticism of. Claude has been notable for a while as the model which somehow winds up being the most useful and having the best ‘vibes’, even when the benchmarks indicate it’s #2 or #3; meanwhile, it is the Chinese models which historically regress the most from their benchmarks when applied (and DeepSeek models, while not as bad as the rest, still do this, and r1 is already looking shakier as people try out held-out problems or benchmarks).
Only if you ignore that yesterday was also when the Trump GPU tariffs were leaking and, pace event-studies, would be expected to be changing prices too.
It’s not RL, but what is RL any more? It’s becoming blurry. They don’t reward or punish it for anything in the thought token. So it learns thoughts that are helpful in outputting the correct answer.
That’s definitely RL (and what I was explaining was simply the obvious basic approach anyone in DRL would think of in this context, and so of course there is research trying things like it). It’s being rewarded on a non-differentiable global loss where the correct alternative or answer or label is not provided (not even information about the existence of a better decision), so standard supervised learning is impossible and exploration is required. Conceptually, this is little different from, say, training a humanoid robot NN to reach a distant point in fewer actions: it can be a hard exploration problem (most sequences of joint torques or actions simply result in a robot having a seizure while lying on the ground going nowhere), where you want to eventually reach the minimal sequence (to minimize energy / wear-and-tear / time), and you start by solving the problem in any way possible, rewarding solely on the final success, and then reward-shape into a desirable answer, which in effect breaks up the hard original problem into two more feasible problems in a curriculum: ‘reach the target ever’, followed by ‘improve a target-reaching sequence of actions to be shorter’.
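(A cartoon of that two-stage reward, with the reward values and the penalty coefficient invented purely for illustration:)

```python
def shaped_reward(reached_target: bool, num_actions: int, phase: int) -> float:
    """Illustrative two-phase reward shaping for the reach-a-point example."""
    if not reached_target:
        return 0.0                      # phase 1 problem: 'reach the target ever'
    if phase == 1:
        return 1.0                      # any success counts, however sloppy
    # Phase 2 problem: 'improve a target-reaching sequence to be shorter',
    # via a small (gradually ramped) per-action penalty.
    return 1.0 - 0.001 * num_actions
```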
While we’re at it, one example I learned afterwards was that the ‘caribou randomization’ story is probably bogus (excerpts):
We will show that hunters do not randomize their behavior, that caribou populations do not fluctuate according to human predation, and that scapulimancy apparently is not selected because it is ecologically advantageous. We shall also show that there is no cross-cultural evidence of divinatory random devices producing randomized subsistence behavior, but rather that people manipulate divination with the explicit or implicit intervention of personal choice.
What is particularly interesting to me is that the apparent beautiful match of this traditional hunting practice with contemporary game theory may be ‘too good to be true’ because it was actually the opposite: I suspect that the story was made up to launder (secret) game-theoretic work from WWII into academic writing; the original author’s career & funder are exactly where that sort of submarine-warfare operations-research idea would come from… (There were many cases post-WWII of civilians carefully laundering war or classified work into publishable form, which means that any history-of-ideas has to be cautious about taking at face value anything published 1940–1960 which looks even a little bit like cryptography, chemistry, physics, statistics, computer science, game theory, or operations research.)
Outputs of o1 don’t include reasoning traces, so not particularly useful compared to outputs of chatbot models, and very expensive, so only a modest amount can be collected.
It would be more precise to say outputs of o1 aren’t supposed to include the reasoning traces. But in addition to the reasoning traces OA voluntarily released, people have been observing what seem to be leaks, and given that the history of LLM robustness to jailbreaks can be summarized as ‘nil’, it is at least conceivable that someone used a jailbreak+API to exfiltrate a bunch of traces. (Remember that Chinese companies like ByteDance have definitely been willfully abusing the OA API for the purposes of knowledge distillation/cloning and evading bans etc, in addition to a history of extremely cutthroat tactics that FANG would blanch at, so it’s a priori entirely plausible that they would do such things.)
I don’t believe DeepSeek has done so, but it is technically possible. (Regardless of whether anyone has done so, it is now partially moot given that r1 traces, per the DS paper and third-party reports thus far, work so well for distillation that everyone can kickstart their own r1-clone with r1 reasoning traces and work from there. There may be more reason to try to exfiltrate o3+ traces, but OA may also decide not to bother: users are claiming to value and/or enjoy reading the raw traces, and since the secret & capability is out, maybe there’s not much point in hiding them any longer.)
There is also GreaterWrong, which I believe caches everything rather than passing through live, so it would be able to restore almost all publicly-visible content, in theory.
Right now, it seems to be important to not restrict the transcripts at all. This is a hard exploration problem, where most of the answers are useless, and it takes a lot of time for correct answers to finally emerge. Given that, you need to keep the criteria as relaxed as possible, as they are already on the verge of impossibility.
The r1 paper, the other guys, and OAers too on Twitter now seem to emphasize that the obvious appealing approach of rewarding tokens for predicted correctness, or doing search on tokens, just doesn’t work (right now). You need to ‘let the LLMs yap’ until they reach the final correct answer. This appears to be the reason for the bizarre non sequiturs or multi-lingual diversions in transcripts—that’s just the cost of rolling out solution attempts which can go anywhere and keeping the winners. They will do all sorts of things which are unnecessary (and conversely, omit tokens which are ‘necessary’). Think of it as the equivalent of how DRL agents will ‘jitter’ and take many unnecessary actions, because those actions don’t change the final reward more than epsilon, and the RL feedback just isn’t rich enough to say ‘you don’t need to bounce up and down randomly while waiting for the ball to bounce back; that doesn’t actually help or hurt you’ (and if you try to reward-shape away those wasteful movements, you may discover your DRL agent converges to a local optimum where it doesn’t do anything, ever, because the jitters served to explore the environment and find new tricks, and you made it too expensive to try useless-seeming tricks, so it never found any payoffs or laddered its way up in capabilities).
So you wouldn’t want to impose constraints like ‘must be 100% correct valid Lean proof’. Because it is hard enough to find a ‘correct’ transcript even when you don’t penalize it for spending a while yapping in Japanese or pseudo-skipping easy steps by not writing them down. If you imposed constraints like that, instead of rolling out 1000 episodes and getting 1 useful transcript and the bootstrap working, you’d get 0 useful transcripts and it’d go nowhere.
What you might do is impose a curriculum: solve it any way you can at first, then solve it the right way. Once you have your o1 bootstrap working and have seen large capability gains, you can go back and retrain on the easiest problems with stricter criteria, and work your way back up through the capability levels, but now in some superior way. (In the DRL agent context, you might train to convergence and only then impose a very, very small penalty on each movement, and gradually ramp it up until the performance degrades a little bit but it’s no longer jittering.) The same way you might be taught something informally, and then only much later, after you’ve worked with it a lot, do you go back and learn or prove it rigorously. You might impose a progressive shrinking constraint, for example, where the transcript has to be fewer tokens each time, in order to distill the knowledge into the forward passes to make it vastly cheaper to run (even cheaper, for hard problems, than simply training a small dumb model on the transcripts). You might try to iron out the irrelevancies and digressions by having a judge/critic LLM delete irrelevant parts. You might try to eliminate steganography by rewriting the entire transcript using a different model. Or you might simply prompt it to write a proof in Lean, and score it by whether the final answer validates.
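(One way the ‘progressive shrinking constraint’ could be scheduled, purely as a sketch: keep the sparse 0/1 correctness reward, but ratchet the allowed transcript length down only while the solve rate holds up, so you never choke off the exploration that made the bootstrap work in the first place.)

```python
def next_token_budget(budget: int, solve_rate: float,
                      min_solve_rate: float = 0.8, shrink: float = 0.9) -> int:
    """Illustrative schedule: 'the transcript has to be fewer tokens each time'."""
    if solve_rate >= min_solve_rate:
        return max(1, int(budget * shrink))   # tighten only while it can still solve problems
    return budget                             # otherwise hold steady

def transcript_reward(correct: bool, num_tokens: int, budget: int) -> float:
    # Still sparse 0/1 on correctness, but over-budget transcripts get nothing,
    # distilling the reasoning into fewer and fewer forward passes.
    return 1.0 if (correct and num_tokens <= budget) else 0.0
```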
“Overtraining” isn’t Chinchilla; Chinchilla is just “training”. The overtraining being advocated was supra-Chinchilla, with the logic that while you were going off the compute-optimal training, sure, you were more than making up for it by your compute-savings in the deployment phase, which the Chinchilla scaling laws do not address in any way. So there was a fad for training small models for a lot longer.
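(Rough numbers, using the standard C ≈ 6·N·D approximation and the ~20-tokens-per-parameter Chinchilla rule of thumb; the particular model sizes are just for illustration:)

```python
def train_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens               # standard C ~= 6*N*D approximation

# Chinchilla-style 'just training': ~20 tokens per parameter.
n_big = 70e9
print(f"{train_flops(n_big, 20 * n_big):.1e}")   # ~5.9e23 FLOPs; big & costly to serve

# Supra-Chinchilla 'overtraining': a much smaller model on far more tokens.
# Worse loss per training FLOP, but inference costs ~2*N FLOPs per token,
# so the deployment bill (which Chinchilla says nothing about) shrinks ~10x.
n_small = 7e9
print(f"{train_flops(n_small, 1e12):.1e}")       # ~4.2e22 FLOPs
print(2 * n_big / (2 * n_small))                 # ~10x cheaper per generated token
```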