Even superhuman AI programming agents may be unable to write programs that solve complex real-world modeling problems in one shot. If solving any of those modeling problems is a prerequisite for building massively better or cheaper computing substrate, then explosive growth will quickly stop being bottlenecked on the ability to write better code and will instead be bottlenecked on something else. I think something similar holds for ML research: being smart is certainly useful to humans, but a lot of progress is downstream of “dumb” investments slowly paying off over time (e.g. a factory that is built once at high upfront cost and keeps churning out cars indefinitely afterwards for relatively low maintenance costs, or a compute cluster which, once built, can be used to run many experiments).
If intelligence ends up not being the bottleneck, progress may slow down to the glacial pace dictated by Moore’s Law.
Current LLM coding agents are pretty bad at noticing that a new library exists to solve a given problem in the first place, and at evaluating whether an unfamiliar library is a good fit for that problem.
As long as those things remain true, developers of new libraries won’t be under much pressure in any direction, beyond pressure to make the LLM think their library is the newest canonical version of some familiar lib.
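To make that last pressure concrete, here is a minimal sketch of what optimizing for LLM adoption (rather than for actual quality) might look like. Everything in it is hypothetical: the library name “fastfetch” and its API are made up, and the point is only that the new library’s public surface deliberately mirrors the familiar `requests` interface, so whatever code an LLM writes from its memory of `requests` runs against it unchanged.

```python
# Hypothetical sketch, not a real library: "fastfetch" copies the familiar
# requests-style call shape so that LLM-generated code written for `requests`
# works against it without changes.
import json as _json
import urllib.request


class Response:
    """Mimics the small slice of requests.Response that generated code usually touches."""

    def __init__(self, status_code: int, body: bytes):
        self.status_code = status_code
        self._body = body

    @property
    def text(self) -> str:
        return self._body.decode("utf-8")

    def json(self):
        return _json.loads(self._body)


def get(url: str, timeout: float = 10.0) -> Response:
    """Same call shape as requests.get(url, timeout=...)."""
    with urllib.request.urlopen(url, timeout=timeout) as raw:
        return Response(raw.status, raw.read())


if __name__ == "__main__":
    # Code an LLM would write from its memory of `requests` runs unchanged:
    resp = get("https://example.com")
    print(resp.status_code, resp.text[:60])
```

The incentive this illustrates is interface mimicry: the library wins LLM adoption by looking like something the model already knows, not by being genuinely better or easier to evaluate.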