I think I broadly agree on the model basics, though I suspect that if you adjust for “market viability”, some of these models are arguably much further ahead than others.
For example, different models have very different pricing, the APIs are gradually getting different features (e.g., prompt caching), and the playgrounds are definitely getting different features. These dimensions seem to be moving much more slowly than the models themselves.
I think it might be considerably easier to make a model that ranks incredibly high than it is to build all the infrastructure for scaling it cheaply and giving it strong APIs/UIs. I also assume there are significant aspects that the evals don’t show. For example, lots of people still find Claude 3.5 to be the best for many sorts of tasks. We’ve been using it with Squiggle AI, and with its good prompt caching, it still hasn’t been obviously surpassed (though I haven’t done much testing of models in the last month).
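To make the prompt-caching point concrete, here’s a minimal sketch of what using it looks like with the Anthropic Python SDK. The model ID, the placeholder system prompt, and the user message are assumptions for illustration, and the exact cache_control details may have shifted since the feature left beta:

```python
import anthropic

# Assumes ANTHROPIC_API_KEY is set in the environment.
client = anthropic.Anthropic()

# Caching pays off on a long, stable prefix (docs, code, a spec) that is
# reused across many calls. This placeholder is hypothetical.
LONG_SYSTEM_PROMPT = "...thousands of tokens of Squiggle documentation..."

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed Claude 3.5 model ID
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": LONG_SYSTEM_PROMPT,
            # Marks this block as a cacheable prefix; subsequent calls
            # that reuse the same prefix read it from cache at a
            # substantially reduced per-token rate.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Write a cost-benefit model."}],
)
print(response.content[0].text)
```

For a tool like Squiggle AI that sends the same large documentation prefix on every request, this kind of caching can matter as much as raw benchmark rank, which is part of why a nominally “surpassed” model can remain the practical choice.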