I think they meant that as an analogy to how developed/sophisticated it was (ie they’re saying that it’s still early days for reasoning models and to expect rapid improvement), not that the underlying model size is similar.
IIRC OAers also said somewhere (doesn’t seem to be in the blog post, so maybe this was on Twitter?) that o1 or o1-preview was initialized from a GPT-4 (a GPT-4o?), so that would also rule out a literal parameter-size interpretation (unless OA has really brewed up some small models).
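To make the parameter-size point concrete, here is a minimal sketch (Python, using the Hugging Face transformers library, with the public gpt2 checkpoint as a hypothetical stand-in, since OA's checkpoints are not publicly loadable) of why initializing from an existing checkpoint fixes the parameter count: the fine-tuned model inherits the base checkpoint's weights and architecture, so its size is whatever the base model's was, not something chosen fresh.

```python
# Minimal sketch: a model "initialized from" a pretrained checkpoint inherits
# that checkpoint's architecture and parameter count. "gpt2" is a hypothetical
# stand-in here; OpenAI's GPT-4-class checkpoints are not publicly available.
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")

# Any subsequent fine-tuning (e.g., RL over chains of thought) updates these
# same weights in place; it does not shrink or otherwise change the model size.
n_params = sum(p.numel() for p in base.parameters())
print(f"parameters inherited from the base checkpoint: {n_params:,}")
```

So if o1 really was initialized from a GPT-4 checkpoint, its parameter count would match that base model's, regardless of what the "GPT-2 of reasoning" analogy might suggest.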
At the same meeting, company leadership gave a demonstration of a research project involving its GPT-4 AI model that OpenAI thinks shows some new skills that rise to human-like reasoning, according to a person familiar with the discussion who asked not to be identified because they were not authorized to speak to press.
There was an article about it before the release.
https://archive.is/IwKSP
(Relevant, although “involving its GPT-4 AI model” is a considerably weaker statement than ‘initialized from a GPT-4 checkpoint’.)