An OAI researcher assures me that the ‘missing details’ refers to additional details used during training to adjust for model-specific details, but that the spec you see is the full final spec, and that in time those details will be added to the final spec too.
OK great! Well then OpenAI should have no problem making it official, right? They should have no problem making some sort of official statement along the lines of what I suggested, right? Right? I think it’s important that someone badger them to clarify this in writing. That makes it harder for them to go back on it later, AND easier to get other companies to follow suit.
(Part of what I want them to make official is the commitment to do this transparency in the future. “What you see here is the full final spec” is basically useless if there’s no commitment to keep it that way; otherwise, next year they could start keeping the true spec secret without telling anyone, and not technically be doing anything wrong or violating any commitments.)
Oh, also: I think it’s important that they be transparent about the spec of models that are deployed internally. For example, suppose they have a chatbot that’s deployed publicly on the API, but also a more powerful version that’s deployed internally to automate their R&D, their security, their lobbying, etc. It’s cold comfort if the external-model spec says democracy and apple pie, while the internal-model spec says maximize shareholder value / act in the interest of OpenAI / etc.