I expect there is still tons of low-hanging fruit available in LLM capabilities land. You could call this “algorithmic progress” if you want. This will decrease the compute cost necessary to get a given level of performance, thus raising the AI capability level accessible to less-resourced open-source AI projects.
Don’t you expect many of those improvements to remain closed-source from here on out, benefitting the teams that developed them at great (average) expense? And even the ones that are published freely will benefit the leaders just as much as their open-source chasers.