Inasmuch as LLMs actually do lead to people creating new software, it’s mostly one-off trinkets/proofs of concept that nobody ends up using and which didn’t need to exist. But it still “feels” like your productivity has skyrocketed.
I’ve personally found that the task of “build UI mocks for the stakeholders”, which was previously ~10% of the time I spent on my job, has gotten probably 5x faster with LLMs. That said, the amount of time I spend on that part of my job hasn’t really gone down; it’s just that the UI mocks are now a lot more detailed and interactive and go through more iterations, which IMO leads to considerably better products.
“This code will be thrown away” is not the same thing as “there is no benefit in causing this code to exist”.
The other notable area I’ve seen benefits is in finding answers to search-engine-proof questions—saying “I observe this error within a task running on xyz stack, here is how I have Kubernetes configured, what concrete steps can I take to debug such a system?”
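To illustrate, here’s a minimal sketch of how such a prompt might be assembled; the pod name, namespace, and wording are hypothetical placeholders, not from any real setup:

```python
import subprocess

def kubectl(*args: str) -> str:
    """Run a kubectl command and return its stdout (assumes kubectl is on PATH)."""
    return subprocess.run(["kubectl", *args], capture_output=True, text=True).stdout

# Placeholder pod/namespace names -- substitute your own.
pod, ns = "my-task-pod", "default"

# Gather the cluster context the model needs to give concrete, non-generic steps.
context = "\n\n".join([
    "--- pod description ---\n" + kubectl("describe", "pod", pod, "-n", ns),
    "--- recent logs ---\n" + kubectl("logs", pod, "-n", ns, "--tail=100"),
    "--- recent events ---\n" + kubectl("get", "events", "-n", ns,
                                        "--sort-by=.lastTimestamp"),
])

prompt = (
    "I observe this error in a task running on my stack. "
    "Here is how I have Kubernetes configured:\n\n"
    f"{context}\n\n"
    "What concrete steps can I take to debug this system?"
)
```

The point is less the specific commands than the shape of the question: enough concrete configuration and logs that the model can suggest steps a search engine never could.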
But it’s probably like a 10-30% overall boost, plus flat cost reductions for starting in new domains and for some rare one-off projects like “do a trivial refactor”.
Sounds about right—“10-30% overall productivity boost, higher at the start of projects, lower for messy tangled legacy stuff” aligns with my observations, with the nuance that it’s not that I am 10-30% more effective at all of my tasks, but rather that I am many times more effective at a few of my tasks and have no notable gains on most of them.
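A quick back-of-envelope calculation makes that nuance concrete: a large speedup on a small slice of the work only moves the overall number a little. Using the ~10%-of-time / 5x figures from the UI-mocks comment above as illustrative inputs:

```python
# Amdahl's-law-style estimate: speed up a fraction of the work, leave the rest alone.
fraction = 0.10  # share of total work the LLM accelerates (illustrative)
speedup = 5.0    # how much faster that share gets (illustrative)

overall = 1 / ((1 - fraction) + fraction / speedup)
print(f"{overall:.2f}x overall")  # ~1.09x, i.e. roughly a 9% boost
```

Even at 10x on that same slice, the overall gain only reaches ~1.10x; you land in the upper part of the 10-30% range only when a meaningfully larger fraction of the work is LLM-friendly.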
And this is mostly where it’ll stay unless AGI labs actually crack long-horizon agency/innovation; i.e., basically until genuine AGI is here.
FWIW I think the bottleneck here is mostly context management rather than agency.
Yes to much of this. For small tasks or where I don’t have specialist knowledge I can get a 10x speed increase—on average I would put it at 20%. Smart autocomplete like Cursor is undoubtedly a speedup with no apparent downside. The LLM is still especially weak where I am doing data science or algorithm-type work, where you need to plot the results and look at the graph to know if you are making progress.