At the very least, you will still need a competent human to find bugs in the machine-generated code
...But why? I understand that LLMs produce code with bugs right now, but why expect that they will continue to do so to an economically meaningful degree in the future? I don’t expect that.
For the same reason we don’t have self-driving cars yet: you cannot expect those systems to be perfectly reliable 100% of the time (well, eventually you probably can, but I don’t expect that level of improvement in the near future just from scaling).
Humans are much better drivers than they are programmers.