A somewhat reliable source has told me that they don’t have the compute infrastructure to support making a more advanced model available to users.
That might also reflect limited engineering effort put into optimizing state-of-the-art models for real-world usage (think of the performance gains from GPT-3.5 Turbo), as opposed to hitting benchmarks for a publishable paper.