“Writing correct algorithms in plain English” is not a common skill
Indeed it is not! But it is easier to automate than “create a requirement spec”. Here are plausible automation steps (a rough sketch of the whole loop follows the list):
1. Get a feature request from a human, in plain English. (E.g. a mockup of the UI, or even “the app must take one-click payments from the shopping-cart screen”.)
2. AI converts it into a series of screen flows etc. Repeat from step 1 until happy.
3. AI generates an internal test set. (Optionally reviewed by a human to check that the tests match the requirements. Go back to step 1 and adjust until happy.)
4. AI generates mocks of all external APIs, matching the existing API specs.
5. AI generates the product (e.g. an app, or an ecosystem, or an FPGA, or whatever else).
6. AI validates the product against the test set and adjusts the product until the tests pass.
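A minimal sketch of what that outer loop might look like, assuming hypothetical ask_llm() and run_tests() helpers (placeholder names standing in for whatever model and test harness are actually used, not any real API):

```python
# Hypothetical sketch of the pipeline above. ask_llm() and run_tests()
# are placeholder stubs, not a real API.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to the code-generating model."""
    raise NotImplementedError

def run_tests(product: str, tests: str, mocks: str) -> list[str]:
    """Placeholder: execute the generated tests, return failure messages."""
    raise NotImplementedError

def build_product(feature_request: str, max_rounds: int = 10) -> str:
    """Steps 1-6: from a plain-English request to a product that passes its tests."""
    spec = ask_llm(f"Turn this feature request into screen flows:\n{feature_request}")
    tests = ask_llm(f"Write an executable test suite for this spec:\n{spec}")
    mocks = ask_llm(f"Generate mocks for all external APIs referenced in:\n{spec}")
    product = ask_llm(f"Implement this spec against these mocks:\n{spec}\n{mocks}")

    # Step 6: close the loop by feeding failures back until the tests pass.
    for _ in range(max_rounds):
        failures = run_tests(product, tests, mocks)
        if not failures:
            return product
        product = ask_llm(f"These tests failed:\n{failures}\nFix this code:\n{product}")
    raise RuntimeError("Tests still failing; escalate to a human.")
```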
At the very least, you will still need a competent human to find bugs in the machine-generated code...
I don’t think so; humans are already worse at it than AI. All you need is to close the feedback loop: judging from the GPT-4 demos I have seen online, giving an error message back to the AI already prompts it to correct the issue. Those are of course syntax errors rather than semantic errors, but that is what the test suite is for: it turns semantic mistakes into failing tests, making the distinction between syntax and semantics moot, and current LLMs are already pretty good at acting on that kind of feedback.
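A toy version of that feedback loop, again with a hypothetical ask_llm() stub in place of a real model call: run the generated script, capture the error output, and hand it back for the next attempt. A test suite plugs into the same loop, since a failing assertion is just another error message.

```python
import subprocess
import sys

def ask_llm(prompt: str) -> str:
    """Placeholder for the code-generating model; not a real API."""
    raise NotImplementedError

def generate_until_it_runs(task: str, attempts: int = 5) -> str:
    """Regenerate a script until it exits cleanly, feeding each error back."""
    code = ask_llm(f"Write a Python script that does the following:\n{task}")
    for _ in range(attempts):
        result = subprocess.run(
            [sys.executable, "-c", code], capture_output=True, text=True
        )
        if result.returncode == 0:
            return code  # no syntax or runtime errors left
        # Hand the error output straight back to the model, as in the demos.
        code = ask_llm(
            f"This script failed with:\n{result.stderr}\n"
            f"Return only the corrected script:\n{code}"
        )
    raise RuntimeError("Still failing after several attempts.")
```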
my best guess is that the software industry will hire less programmers rather than less competent programmers
Yes, and they will not be “programmers”; they will be “AI shepherds” or something.
I suspect that we are thinking about different use cases here.
For very standard things without complicated logic, like an e-commerce app or a showcase site, I can concede that an automated workflow could work without anyone ever looking at the code. This is (sort of) already possible without LLMs: there are already several Full Site Editing apps for building standard websites without looking at the code.
But suppose that your customer needs a program that can solve a complicated scheduling or routing problem tailored to their specific needs. Maybe our non-programmer knows the theoretical structure of routing problems and can direct the LLM to write the correct algorithms, but in that case it is definitely not an unskilled job (I suspect that <1% of the general population would be able to describe a routing problem in formal terms).
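To give a sense of what “describing a routing problem in formal terms” actually demands, here is a deliberately tiny, made-up delivery instance stated as data, a feasibility condition, and an objective; all the numbers are invented for illustration:

```python
from itertools import permutations

# A made-up toy instance: one depot (location 0), four customers, one vehicle.
distances = {  # symmetric travel costs between pairs of locations
    (0, 1): 4, (0, 2): 7, (0, 3): 3, (0, 4): 6,
    (1, 2): 2, (1, 3): 5, (1, 4): 8,
    (2, 3): 6, (2, 4): 3,
    (3, 4): 4,
}
demands = {1: 2, 2: 3, 3: 1, 4: 2}  # units to deliver to each customer
capacity = 8                         # vehicle capacity

def dist(a: int, b: int) -> int:
    return 0 if a == b else distances[(min(a, b), max(a, b))]

def tour_cost(order: tuple[int, ...]) -> int:
    """Cost of depot -> customers in `order` -> depot."""
    stops = (0, *order, 0)
    return sum(dist(a, b) for a, b in zip(stops, stops[1:]))

# Objective: the cheapest feasible order in which to visit every customer.
feasible = sum(demands.values()) <= capacity
best = min(permutations(demands), key=tour_cost) if feasible else None
print(best, tour_cost(best) if best else "infeasible")
```

Even in this toy form, someone has to decide what counts as a cost, what the capacity constraint is, and what “best” means; that is the part the operator cannot delegate.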
If our non-programmer is actually unskilled and has no clue about routing problems… what are we supposed to do? Throw vague specs at the AI and hope for the best?
The person can… ask the AI about routing algorithms and related problems? The bots are already pretty good at describing the current state of the field. The person can then work out a workable approach interactively, before instructing the bot to spawn a specialized router app. That is to say, it will not be an unskilled job; it still requires someone who can learn, understand, and make sensible decisions, which is in many ways harder than implementing a given algorithm. They just won’t be doing any “programming” as the term is understood now.