A quick comment: the o3 and o3-mini announcements each include two significantly different scores, one ≤ 10% and the other ≥ 25%. Our own eval of o3-mini (high) scored 11% (it’s on Epoch’s Benchmarking Hub). We don’t actually know what the higher scores represent; they could reflect some combination of extreme compute, tool use, scaffolding, majority voting, etc. But we’re pretty sure there is no publicly accessible way to get that level of performance out of the model, and certainly not performance capable of “crushing IMO problems.”
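For concreteness, here is a minimal sketch of what a majority-vote scaffold could look like; this is purely illustrative, not Epoch’s harness or OpenAI’s actual setup, and the sample answers are made up:

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Return the most common final answer among k independent samples.

    Ties are broken by whichever answer Counter happens to order first.
    """
    best, _count = Counter(answers).most_common(1)[0]
    return best

# Hypothetical usage: sample the model k times on one problem, extract
# each run's final answer, and keep the consensus answer for grading.
samples = ["722", "722", "718", "722", "691"]
print(majority_vote(samples))  # -> "722"
```

With enough samples per problem, aggregation like this can lift accuracy well above a single-attempt score, which is part of why headline numbers are hard to interpret without knowing the sampling setup.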
I do have the reasoning traces from the high-scoring o3-mini run. They’re extremely long, and one of the ways the model leverages the extra resources is by engaging in an internal dialogue where it does a pretty good job of catching its own errors/hallucinations and backtracking until it finds a path to a solution it’s confident in. I’m still writing up my analysis of the traces and surveying the problem authors for their opinions on them, and will also update e.g. my IMO predictions with what I’ve learned.
Yes, the privacy constraints make the implications of these improvements less legible to the public. We have multiple plans for disseminating information within these constraints, such as publishing the authors’ survey comments on the reasoning traces and running our competition at the end of the month to establish a sort of human baseline.
Still, I don’t know that the privacy of FrontierMath is worth all the roundabout efforts we must engage in to explain it. For future projects, I would be interested in other approaches to balancing two goals: preventing models from training on public discussion of the problems, and being able to clearly show the world what the models are tackling. Maybe it would be feasible to do IMO-style releases? “Here are 30 new problems we collected this month. We will immediately test all the models and then make the problems public.”