I think you are underestimating the level of exception handling required to completely automate the average software engineer's job, as happened to unskilled farmhands and factory workers. A slightly atypical few hours for a software engineer at the moment, as an example, might be discovering the logging facility stopped working on an important VM, SSHing in and figuring out what went wrong, and then applying a patch to another related piece of software to fix the bug. LLMs could coach regular people through that process over the shoulder like a senior engineer, but they couldn't automate the whole thing, not because the individual pieces are too intellectually difficult but because it requires too much diverse and unsupervised tool use and investigation. If some AI successor to LLMs could be trusted to do all of that in the next few years, then we probably only have a short while until something FOOMs.
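To make that concrete, a minimal sketch of the "investigate the dead logging facility" step, with made-up file names and a toy log standing in for the real system (the service name in the comment is hypothetical):

```shell
# Hypothetical post-SSH diagnosis of a logging facility that stopped
# writing. The log path and contents are stand-ins for illustration.
log=/tmp/app.log
printf 'INFO start\nERROR disk full\n' > "$log"   # sample data

# Find the last error recorded before output stopped.
grep -n 'ERROR' "$log" | tail -1

# A human engineer would now patch the offending component and restart
# the service (e.g. something like `systemctl restart app-logger`).
```

Each individual command is trivial; the hard part being argued about is an AI deciding, unsupervised, which of these steps to take next.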
This argues against the point from your last reply above this: you said "more people migrate to software engineering from other jobs", and your reply here contradicts that.
Hm, did I? I think if an over-the-shoulder senior engineer becomes a rounding error in terms of expenses, then the solution is in fact to hire three times more engineers and pay each of them a third as much. What do you think the implications of what I said are?
Because anything the AI cannot figure out on its own from the error, or by logging in, requesting the logs, and opening them up (capabilities that could be trivially added to current-gen AI), is not something a "junior" human engineer is likely to figure out either.
As in other industries all the other times this happened, I instead expect a third the number of engineers (for a given quantity of software), each paid three times as much.
And because what you just described stems from faulty architecture. A big reason why current systems are often so hard to debug and so "exception filled" is that they have trash designs. As in, they are so bad that a competent architect could trivially create a better one, but rebuilding a software product from scratch costs so much money that the architecture becomes locked in and the technical debt permanent.
This all vanishes if AI "senior engineers" can churn out all-new code satisfying a new design, and passing product-level tests, in a few months.