Hey everyone, could you spell out to me what’s the issue here? I read a lot of comments that basically assume “x and y are really bad” but never spell it out. So, is the problem that:
- Giving the benchmark to OpenAI helps capabilities (but don’t they already have a vast sea of hard problems to train models on?)
- OpenAI could fake o3’s capabilities (why do you care so much? This would slow down AI progress, not accelerate it)
- Some other thing I’m not seeing?
Really high-quality, high-difficulty benchmarks are much scarcer, and much more important for advancing capabilities, than mere training data. Having an apparently x-risk-focused org build a benchmark, implying it’s for evaluating danger from highly capable models in a way the capabilities orgs can’t use to test their models, then having it turn out it was secretly funded by OpenAI, with OpenAI getting access to most of the data, is very sketchy.
Some people who contributed questions likely thought they would be reducing x-risk by helping build bright-line warning signs. Their work being available to OpenAI will mostly have increased x-risk by giving the capabilities people an unusually important number-goes-up to optimize for, bringing timelines to dangerous systems closer. That’s a betrayal of trust, and Epoch should do some serious soul-searching about taking money to do harmful things.
If the funding didn’t come from OpenAI, would OpenAI still be able to use the benchmark? Like, I’d imagine Epoch would still use it to evaluate where current models are at. I think this might be my point of confusion. Maybe the answer is “not enough for it to be as useful to them”?
Evaluation on demand, which they can run intensively, lets them test small models for architecture improvements. This is where the vast majority of the capability gain is.
Getting an evaluation of each final model is going to be way less useful for the research cycle, as it only gives a final score, not a metric which is part of the feedback loop.
Yes, that answer matches my understanding of the concern. If the vast majority of the dataset were private to Epoch, OpenAI could occasionally submit their solutions (probably via API) to Epoch for grading, but wouldn’t be able to use the dataset at high frequency as an evaluation across many experiments.
This is assuming that companies won’t fish out the data from API logs anyway, which the OP asserts but I think is unclear.
Also, if they have access to the mathematicians’ reasoning in addition to the final answers, this could be valuable even without directly training on it (e.g. they could use it to evaluate process-based grading approaches).
(FWIW I’m explaining the negatives, but I disagree with the comment I’m expanding on regarding the sign of FrontierMath; it seems positive-EV to me despite the concerns.)
I’m guessing you view having a better understanding of what’s coming as very high value, enough that burning some runway is acceptable? I could see that model (though I put <15% on it), but I think it was at least bad, integrity-wise, to have put on the appearance of doing just the good-for-x-risk part and not sharing it as an optimizable benchmark, while being funded by, and giving the data to, people who will use it for capability advancements.
Wanted to write a more thoughtful reply to this, but basically yes, my best guess is that the benefits of informing the world are in expectation bigger than the negatives from acceleration. A potentially important background view is that I think takeoff speeds matter more than timelines, and it’s unclear to me how FrontierMath affects takeoff speeds.
I wasn’t thinking much about the optics, but I’d guess that’s not a large effect. I agree that Epoch made a mistake here though and this is a negative.
I could imagine changing my mind somewhat easily.
Agree that takeoff speeds are more important, and I expect FrontierMath has much less effect on takeoff speed. I still think timelines matter enough that the amount of relevant informing of people you buy from this is likely not worth the cost, especially if the org avoids talking about risks in public and leadership isn’t focused on agentic takeover, so the info isn’t packaged with the context needed for it to have the effects which would help.
Evaluating the final model tells you where you got to. Evaluating many small models and checkpoints helps you get further faster.
In addition to the object-level reasons mentioned by plex, misleading people about the nature of a benchmark is a problem because it is dishonest. Having an agreement to keep this secret indicates that the deception was more likely intentional on OpenAI’s part.