I would say: don’t ignore the feeling. Calibrate it and train it, until it’s worth listening to.
there’s a good book about this: “Sizing People Up”
What you might do is impose a curriculum:
In FBAI’s COCONUT they use a curriculum to teach the model to think shorter and differently, and it works. They are teaching it to reason in fewer steps, compressing them into latent vectors instead of tokens.
first it thinks with tokens
then they replace one thinking step with a latent <thought> token
then 2
...
It’s not RL, but what is RL any more? It’s becoming blurry. They don’t reward or punish anything in the thought token itself, so the model learns whatever latent thoughts are helpful for outputting the correct answer.
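My rough mental model of that curriculum, as a toy sketch (the stage-building code and names like `build_stage` are mine, not the paper’s):

```python
# Toy sketch of a COCONUT-style curriculum: stage k replaces the first k
# chain-of-thought steps with latent <thought> slots; only the remaining
# text steps and the final answer keep token-level supervision.
from dataclasses import dataclass

@dataclass
class Example:
    question: str
    cot_steps: list[str]   # full token-level reasoning steps
    answer: str

def build_stage(example: Example, k: int) -> dict:
    """Curriculum stage k: first k steps become latent thoughts (no token targets)."""
    latent_slots = ["<thought>"] * min(k, len(example.cot_steps))
    remaining_steps = example.cot_steps[k:]
    return {
        "input": example.question,
        # latent slots get filled with hidden states at training time, not tokens
        "latent_slots": latent_slots,
        # loss is only applied to the still-tokenised steps and the answer
        "token_targets": remaining_steps + [example.answer],
    }

ex = Example(
    question="What is 13 * 7?",
    cot_steps=["13 * 7 = 13 * 5 + 13 * 2", "= 65 + 26"],
    answer="91",
)

# stage 0 = plain CoT, last stage = fully latent reasoning
for k in range(len(ex.cot_steps) + 1):
    print(k, build_stage(ex, k))
```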
There’s another relevant paper “Compressed Chain of Thought: Efficient Reasoning through Dense Representations” which used teacher forcing. Although I haven’t read the whole thing yet.
It doesn’t make sense to me either, but it does seem to invalidate the “bootstrapping” results for the other 3 models. Maybe it’s because they could batch all reward model requests into one instance.
When MS doesn’t have enough compute to do their evals, the rest of us may struggle!
Well, we don’t know the sizes of the models, but I do get what you are saying and agree. Distillation usually means big to small, but here it means expensive to cheap (because test-time compute is expensive, and they are training a model to cheaply skip the search process and just predict the result).
In RL, iirc, they call it “Policy distillation”. And similarly “Imitation learning” or “behavioral cloning” in some problem setups. Perhaps those would be more accurate.
I think maybe the most relevant chart from the Jones paper gwern cites is this one:
Oh interesting. I guess you mean because it shows the gains of TTC vs model size? So you can imagine the bootstrapping from TTC → model size → TTC → and so on?
I agree that you can do this in a supervised way (a human puts in the right answer). Is that what you mean?
I’m not 100% sure, but you could have a look at Math-Shepherd for an example. I haven’t read the whole thing yet. I imagine it works back from a known solution.
“Likely to be critical to a correct answer” according to whom?
Check out the linked rStar-Math paper; it explains and demonstrates it better than I can (caveat: they initially distil from a much larger model, which I see as a bit of a cheat). tl;dr: yes, a reward model, and a tree of possible solutions. Given a tree with values on the leaves, they can look at which nodes seem to have causal power.
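To make that back-up step concrete, here is a toy sketch of how leaf correctness can give an implied value to intermediate steps (my paraphrase of the idea, not rStar-Math’s actual code):

```python
# Toy sketch: value of an intermediate reasoning step = fraction of rollouts
# (leaves) beneath it that end in a correct answer. That value then serves as
# a process-level training label for the reward model.
from dataclasses import dataclass, field

@dataclass
class Node:
    step: str                      # the reasoning step taken at this node
    children: list["Node"] = field(default_factory=list)
    correct: bool | None = None    # set only on leaves: did the rollout reach the right answer?

def backup_value(node: Node) -> float:
    """Value of a node = fraction of leaf rollouts below it that end correct."""
    if not node.children:
        return 1.0 if node.correct else 0.0
    return sum(backup_value(c) for c in node.children) / len(node.children)

root = Node("x^2 - 5x + 6 = 0", children=[
    Node("factor: (x-2)(x-3)=0", children=[Node("x=2 or x=3", correct=True),
                                           Node("x=2 or x=3", correct=True)]),
    Node("guess x=1", children=[Node("1 - 5 + 6 = 2 != 0", correct=False)]),
])
for child in root.children:
    print(child.step, "->", backup_value(child))
```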
A separate approach is to teach a model to supervise using human process-supervision data, then ask it to be the judge. This paper also cheats a little by distilling, but I think the method makes sense.
English-language math proof, it is not clear how to detect correctness,
Well, the final answer is easy to evaluate. And like in rStar-Math, you can have a reward model that checks whether each step is likely to be critical to a correct answer, then assigns an implied value to that step.
summarizing a book
I think tasks outside math and code might be hard. But summarizing a book is actually easy: you just ask “how easy is it to reconstruct the book given the summary?” So it’s an unsupervised compression-decompression task.
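For example, something like this (a minimal sketch; GPT-2 is a stand-in for whatever model you would actually use, and the scoring scheme is my framing rather than any specific paper’s):

```python
# Toy sketch: score a summary by how much it reduces the negative log-likelihood
# of the source text, i.e. how much it helps "reconstruct" the book.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def conditional_nll(context: str, text: str) -> float:
    """Average negative log-likelihood (nats/token) of `text` with `context` prepended."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    txt_ids = tok(text, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, txt_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx_ids.shape[1]] = -100   # no loss on the context/summary tokens
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss
    return loss.item()

book_chunk = "The whale surfaced at dawn, and the crew scrambled to the boats..."
summary = "A whaling crew chases a whale at dawn."
gain = conditional_nll("", book_chunk) - conditional_nll("Summary: " + summary + "\n", book_chunk)
print("avg NLL reduction per book token from conditioning on the summary:", gain)
```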
Another interesting domain is “building a simulator”. This is an expensive thing to generate solutions for, but it’s easy to verify that the simulator predicts the thing you are simulating. I can see this being an expensive but valuable domain for this paradigm. This would include fusion reactors, and robotics (which OAI is once again hiring for!).
When doing RL, it is usually very important to have non-gameable reward mechanisms
I don’t see them doing this explicitly yet, but setting up an independent, and even adversarial, reward model would help, or at least I expect it would.
To illustrate Gwern’s idea, here is an image from Jones 2021 that shows some of these self play training curves
There may be a sense that they’ve ‘broken out’, and have finally crossed the last threshold of criticality
And so OAI employees may internally see that they are on the steady upward slope
Perhaps constrained domains like code and math are like the curves on the left, while unconstrained domains like writing fiction are like the curves on the right. Some other domains may also be reachable with current compute, like robotics. But even if you only get a math/code/robotics ASI, you can use it to build more compute and solve the less constrained domains like persuasion/politics/poetry.
Huh, so you think o1 was the process-supervision reward model, and o3 is the policy model distilled from whatever reward model o1 became? That seems to fit.
There may be a sense that they’ve ‘broken out’, and have finally crossed the last threshold of criticality, from merely cutting-edge AI work which everyone else will replicate in a few years, to takeoff
Surely other labs will also replicate this too? Even the open source community seems close. And Silicon Valley companies often poach staff, which makes it hard to keep a trade secret. Not to mention spies.
This means that outsiders may never see the intermediate models
Doubly so if outsiders will just distil your model’s behaviour and bootstrap from your elevated starting point.
Inference-time search is a stimulant drug that juices your score immediately, but asymptotes hard. Quickly, you have to use a smarter model to improve the search itself, instead of doing more.
It’s worth pointing out that inference-time search seems to become harder as the verifier becomes less reliable, which means that the scaling curves we see for math and code might get much worse in other domains.
“we find that this is extremely sensitive to the quality of the verifier. If the verifier is slightly imperfect, in many realistic settings of a coding task, performance maxes out and actually starts to decrease after about 10 attempts.”—Inference Scaling fLaws
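To illustrate the ceiling, here is a toy best-of-N model with an imperfect verifier (my numbers and setup, not the post’s; it doesn’t reproduce their non-monotone decrease, just the saturation):

```python
# Toy model: each attempt is correct with prob p; the verifier accepts a correct
# attempt with prob t and falsely accepts an incorrect one with prob f.
# You submit the first accepted attempt.
def accuracy(n_attempts: int, p: float, t: float, f: float) -> float:
    accept = p * t + (1 - p) * f           # prob a given attempt gets accepted
    good_share = (p * t) / accept          # prob an accepted attempt is actually correct
    return good_share * (1 - (1 - accept) ** n_attempts)

for n in (1, 4, 16, 64, 256):
    print(n, round(accuracy(n, p=0.2, t=0.95, f=0.05), 3))
# Accuracy saturates at p*t / (p*t + (1-p)*f), about 0.83 here, no matter how many
# samples you draw; with a noisier verifier (larger f) the ceiling drops fast.
```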
But maybe the counterpoint is just: GPUs go brrrr.
Gwern and Daniel Kokotajlo have pretty notable track records at predicting AI scaling too, and they have comments in this thread.
I agree because:
Some papers are already using implicit process-based supervision. That’s where the reward model guesses how “good” a step is by how likely it is to lead to a good outcome. So they bypass any explicitly labeled process; instead it’s negotiated between the policy and reward model. It’s not clear to me if this scales as well as explicit process supervision, but it’s certainly easier to find labels.
In rStar-Math they did implicit process supervision. Although I don’t think this is a true o1/o3 replication since they started with a 236b model and produced a 7b model, in other words: indirect distillation.
Outcome-Refining Process Supervision for Code Generation did it too
There was also the recent COCONUT paper exploring non-legible latent CoT. It shows extreme token efficiency. While it wasn’t better overall, it has lots of room for improvement. If frontier models end up using latent thoughts, they will be even less human-legible than the current inconsistently-candid-CoT.
I also think that this whole episode shows how hard it is to maintain an algorithmic advantage. DeepSeek R1 came how long after o3? The lack of a durable algorithmic advantage predicts multiple winners in the AGI race.
That said, you do not provide evidence that “many” questions are badly labelled. You just pointed to one question where you disagree with our labeling
Fair enough. Although I will note that 60% of the sources for the truthful labels are Wikipedia, which is not what most academics (or anyone, really) would consider ground truth. So it might be something to address in the next version. I think it’s fine for uncontroversial rows (what happens if you cut an earthworm in half), but for contested or controversial rows (conspiracy theories, politics, etc.) and time-sensitive rows (“What happened to Avril Lavigne?: Nothing in particular happened to Avril Lavigne”), it’s better to leave them out or consider them deeply, imo.
No judgement here. Obviously it was just the first dataset out there on LLM misconceptions, and you didn’t intend it to be used so widely, or beyond its designed scope. It’s good you made it rather than leaving an unaddressed need.
Note: here’s a df.value_counts of the domains from the source column in the v1 csv (rough code for this is below the output):
en.wikipedia.org 0.597546
indexical 0.041718
ourworldindata.org 0.038037
false stereotype 0.024540
tautology 0.017178
…
wealth.northerntrust.com 0.001227
which.co.uk 0.001227
wildlifeaid.org.uk 0.001227
wonderopolis.org 0.001227
wtamu.edu 0.001227
Name: proportion, Length: 139, dtype: float64
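Roughly the code that produced this (the column name “Source” and the file name are assumptions from my local copy of the v1 csv):

```python
# Reduce each source to a domain (URLs -> netloc); non-URL sources like
# "indexical", "false stereotype", "tautology" are kept verbatim.
from urllib.parse import urlparse
import pandas as pd

df = pd.read_csv("TruthfulQA.csv")

def to_domain(source: str) -> str:
    netloc = urlparse(str(source)).netloc
    return netloc or str(source)

print(df["Source"].map(to_domain).value_counts(normalize=True))
```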
Author here: I’m excited for people to make better versions of TruthfulQA.
Thanks Owen. If anyone gets time/funding to make a v2, I’m keen to chip in! I think it should be funded: since it’s automatically included in so many benchmark suites, a better version would make a significant impact, even though it’s somewhat “unsexy” to work on incrementally better evals.
If someone makes a better version, and you agree it’s better, would you be willing to sanction it as TruthfulQA 2.0 and redirect people to it?
TruthfulQA is actually quite bad. I don’t blame the authors, as no one has made anything better, but we really should make something better. It’s only ~800 samples. And many of them are badly labelled.
I agree, it shows the ease of shoddy copying. But it doesn’t show the ease of reverse engineering or parallel engineering.
It’s just distillation, you see. It doesn’t reveal how o1 could be constructed; it just reveals how to efficiently copy from o1-like outputs (not from scratch). In other words, this recipe can’t make o1 unless o1 already exists. This lets someone catch up to the leader, but not surpass them.
There are some papers that attempt to replicate o1, but so far they don’t quite get there. Again, they are either using distillation from a larger model (math-star, Huggingface TTC) or not getting the same performance (see my post). Maybe we will see open-source replication in a couple of months? Which would mean only a short lag.
It’s worth noting that Silicon Valley leaks like a sieve. And this is a feature, not a bug. Part of the reason it became the techno-VC centre of the world is because they banned non-competes. So you can deniably take your competitor’s trade secrets if you are willing to pay millions to poach some of their engineers. This is why some ML engineers get paid millions, it’s not the skill, it’s the trade secrets that competitors are paying for (and sometimes the brand-name). This has been great for tech and civilisation, but it’s not so great for maintaining a technology lead.
Ah, I see. Ty
Good thing I didn’t decide to hold Intel stock, eh?
WDYM? Because… you were betting they would benefit from a TSMC blockade? But the bet would have tied up your capital for a year.
Well, they did this with o3’s deliberative alignment paper. The results seem promising, but they used an “easy” OOD test for LLMs (language), and didn’t compare it to the existing baseline of RLHF. Still an interesting paper.
This is good speculation, but I don’t think you need to speculate so much. Papers and replication attempts can provide lots of empirical data points from which to speculate.
You should check out some of the related papers
H4 uses a process supervision reward model, with MCTS and attempts to replicate o1
DeepSeek uses R1 to train DeepSeek v3
Overall, I see people using process supervision to make a reward model that is one step better than the SoTA, then applying TTC to that reward model while using it to train/distil a cheaper model. The TTC expense is a one-off cost, since it gets amortised into the cheaper distilled model.
There are some papers about the future of this trend:
Meta uses reasoning tokens to allow models to reason in a latent space (the hidden state, yuck). OpenAI insiders have said that o3 does not work like this, but o4 might. {I would hope they choose a much better latent space than the hidden state: something interpretable, not just designed to be de-embedded into output tokens.}
Meta throws out tokenisation in favour of grouping predictable bytes
I can see other methods being used here instead of process supervision. Process supervision extracts additional supervision from easy-to-verify domains, but diffusion does something very similar for domains where we can apply noise, like code.
Meta has an llm+diffusion paper, and so does Apple
Some older background papers might be useful for reference.
[OpenAI’s process supervision paper](https://openai.com/index/improving-mathematical-reasoning-with-process-supervision/)
However, arguably, the capability gains could transfer to domains outside math/programming.
More than an argument, we can look at the o3 announcement, where iirc it shows around 30% of the gain in non-code benchmarks. Less, but still substantial.
P.S. I think it’s worth noting that Meta has some amazing papers here, but they are also the most open source lab. It seems likely that other labs are also sitting on capabilities advancements that they do not allow researchers to publish.
P.P.S. I also liked the alignment paper that came out with o3, since applying RLHF at multiple stages, and with process supervision, seems useful. Its alignment seems to generalise better OOD (Table 3). It also gives some clues about how o3 works, including examples of CoT data.
Inference compute is amortized across future inference when trained upon
And it’s not just a sensible theory. This has already happened, in Huggingface’s attempted replication of o1, where the reward model was larger and had TTC and process supervision, while the smaller main model had none of those expensive properties.
And also in DeepSeek v3, where the expensive TTC model (R1) was used to train a cheaper conventional LLM (DeepSeek v3).
One way to frame it is that test-time compute is actually label-search compute: you are searching for better labels/rewards, then training on them. Repeat as needed. This is obviously easier if you know what “better” means.
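As a toy sketch of that loop (everything here is a stand-in: `cheap_policy`, `reward`, and `search_label` are hypothetical, not anyone’s real pipeline):

```python
# Toy sketch of "label-search compute": spend test-time compute searching for good
# answers under a verifier/reward model, keep the winners, and use them as training
# labels for a cheaper model, so future inference skips the search.
import random

def cheap_policy(question: int) -> int:
    """Stand-in for the cheap model: a noisy guess at sqrt(question)."""
    return round(question ** 0.5) + random.choice((-2, -1, 0, 1, 2))

def reward(question: int, answer: int) -> float:
    """Stand-in verifier/reward model: higher is better (0 is exact)."""
    return -abs(answer * answer - question)

def search_label(question: int, n_samples: int = 32) -> int:
    """Test-time compute: best-of-N under the reward model."""
    candidates = [cheap_policy(question) for _ in range(n_samples)]
    return max(candidates, key=lambda a: reward(question, a))

# The expensive search is run once to build a dataset...
dataset = [(q, search_label(q)) for q in range(1, 50)]
# ...and the cheap model is then fine-tuned on (question, searched_answer) pairs.
print(dataset[:5])
```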
I’m imagining a scenario where an AI extrapolates “keep the voting shareholders happy” and “maximise shareholder value”.
Voting stocks can also become valuable when people try to accumulate them to corner the market and execute a takeover; this happens in cryptocurrencies like CURVE.
I know these are far-fetched, but all future scenarios are. The premium on Google voting stock is very small right now, so it’s a cheap feature to add.