Two subtle aspects of the latest OpenAI announcement, https://openai.com/index/openai-board-forms-safety-and-security-committee/.

First, the announcement states:

“A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”

So what they are saying is that merely disclosing the adopted safety and security recommendations might itself be hazardous. They will share an update publicly, but that update will not necessarily disclose the full set of adopted recommendations.

Second:

“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI.”

What remains unclear is whether this is a “roughly GPT-5-level model”, or whether they already have a “GPT-5-level model” for internal use and this is their first “post-GPT-5 model”.