It won’t make sense for companies to release the ‘true’ level-5 models because of inference expense and speed.
Yes, and not only that: one does not want to show one’s true level to competitors, and one does not want to let competitors study the model by poking at it via API.
And if a level-5 model is already a big help in AI R&D, one does not want to share it either; instead, one wants to use it to get ahead in the AI R&D race.
I can imagine a strategy of waiting till one has level-6 models for internal use before sharing full level-5 models.
And then there are safety and liability considerations. It’s not that internal use is completely safe, but it’s far safer than exposing the API to the world.
Also, it looks like we are getting AIs that are easy to make corrigible, and thus to align iteratively toward DWIM goals, but such models still can’t be released to the public without restrictions, because they could still be heavily misused.