My strong guess is that OpenAI’s results are real; it would really surprise me if they were literally cheating on the benchmarks. It looks like they are simply using much more inference-time compute than is available to any outside user, together with a clever scaffold that lets the model use the extra inference time productively. Elliot Glazer (creator of FrontierMath) says in a comment on my recent post on FrontierMath:
A quick comment: the o3 and o3-mini announcements each have two significantly different scores, one <= 10%, the other >= 25%. Our own eval of o3-mini (high) got a score of 11% (it’s on Epoch’s Benchmarking Hub). We don’t actually know what the higher scores mean, could be some combination of extreme compute, tool use, scaffolding, majority vote, etc., but we’re pretty sure there is no publicly accessible way to get that level of performance out of the model, and certainly not performance capable of “crushing IMO problems.”
I do have the reasoning traces from the high-scoring o3-mini run. They’re extremely long, and one of the ways it leverages the higher resources is to engage in an internal dialogue where it does a pretty good job of catching its own errors/hallucinations and backtracking until it finds a path to a solution it’s confident in. I’m still writing up my analysis of the traces and surveying the authors for their opinions on the traces, and will also update e.g. my IMO predictions with what I’ve learned.
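To make the "scaffolding" and "majority vote" possibilities concrete, here is a minimal, purely speculative sketch in Python of how extra inference-time compute could be spent: each attempt critiques and revises itself until it is confident (the backtracking behaviour Glazer describes in the traces), and then a majority vote is taken over many independent attempts. Nothing here reflects OpenAI's actual setup; `sample_model`, the confidence scores, and all the parameter values are stand-ins I made up for illustration.

```python
import random
from collections import Counter


# Placeholder for whatever model call the scaffold wraps. The real system's
# API and sampling settings are not public, so this stub just returns a
# made-up (answer, confidence) pair for illustration.
def sample_model(prompt: str) -> tuple[str, float]:
    answer = random.choice(["42", "42", "17"])   # pretend final answers
    confidence = random.uniform(0.3, 1.0)        # pretend self-reported confidence
    return answer, confidence


def solve_with_scaffold(problem: str,
                        n_samples: int = 64,
                        n_revisions: int = 4,
                        confidence_threshold: float = 0.8) -> str:
    """Illustrative only: spend extra inference-time compute by
    (1) letting each attempt check and revise its own reasoning, and
    (2) taking a majority vote over many independent attempts."""
    final_answers = []
    for _ in range(n_samples):
        answer, confidence = sample_model(problem)
        # Self-check loop: if the attempt is not confident, ask it to look
        # for errors in its own reasoning and try again.
        for _ in range(n_revisions):
            if confidence >= confidence_threshold:
                break
            revision_prompt = (f"{problem}\n\nYour previous answer was {answer}. "
                               "Check the reasoning for errors and revise if needed.")
            answer, confidence = sample_model(revision_prompt)
        final_answers.append(answer)
    # Majority vote over the surviving answers.
    return Counter(final_answers).most_common(1)[0][0]


if __name__ == "__main__":
    print(solve_with_scaffold("What is 6 * 7?"))
```

The two knobs here, n_samples and n_revisions, are exactly the kind of settings an outside user cannot crank up, which is one way a gap between the announced scores and Epoch's own 11% eval could arise without any cheating.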