[Note: I use Copilot and like it. The ‘aha’ moment for me was when I needed to calculate the intersection of two lines, a thing that I would normally just copy/paste from Stack Overflow, and instead Copilot wrote the function for me. Of course I then wrote tests and it passed the tests, which seemed like an altogether better workflow.]
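For the curious, what it wrote was essentially the standard determinant formula for intersecting two infinite lines. Here is a reconstruction from memory (not Copilot's verbatim output), together with the kind of test I then wrote against it:

```python
from typing import Optional, Tuple

Point = Tuple[float, float]

def line_intersection(p1: Point, p2: Point, p3: Point, p4: Point) -> Optional[Point]:
    """Intersection of the infinite lines through (p1, p2) and (p3, p4).

    Returns None if the lines are parallel (or coincident).
    """
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    x4, y4 = p4
    # Denominator of the standard determinant formula.
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None  # parallel or coincident lines
    det12 = x1 * y2 - y1 * x2
    det34 = x3 * y4 - y3 * x4
    x = (det12 * (x3 - x4) - (x1 - x2) * det34) / denom
    y = (det12 * (y3 - y4) - (y1 - y2) * det34) / denom
    return (x, y)

# The kind of tests I wrote afterwards:
assert line_intersection((0, 0), (1, 1), (0, 1), (1, 0)) == (0.5, 0.5)
assert line_intersection((0, 0), (1, 0), (0, 1), (1, 1)) is None  # parallel
```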
Language models are good enough at generating code to make the very engineers building such models slightly more productive.
How much of this is ‘quality of code’ vs. ‘quality of data’? I would naively expect that the sort of algorithmic improvements generated by OpenAI engineers using Copilot/Codex/etc. are relatively low-impact compared to the sort of benefits you get from adding your company’s codebase to the corpus (or whatever the appropriate version of that actually is). I’m somewhat pessimistic about the benefits of adding Copilot-generated code to the corpus as a method of improving Copilot.
I buy that “generated code” will not add anything to the training set, and that Copilot doesn’t help with getting good data or (directly) with better algorithms. However, the feedback loop I am pointing at is the one created when you accept suggestions in Copilot: I think it is learning from human feedback about which solutions people select. If the model is “finetuned” to a specific dev’s coding style, I would expect Codex to suggest even better code (because of the high quality of the finetuning data) to someone at OAI than to me or you.
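To make that feedback loop concrete, here is a minimal sketch of the kind of telemetry I have in mind; the schema and field names are my invention, not GitHub's actual API:

```python
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class SuggestionEvent:
    """One suggestion and what the developer did with it (hypothetical schema)."""
    prompt_context: str  # code around the cursor when the suggestion fired
    suggestion: str      # what the model proposed
    accepted: bool       # did the dev hit Tab, or dismiss it?
    timestamp: float

def log_event(event: SuggestionEvent, path: str = "feedback.jsonl") -> None:
    """Append the event; accepted suggestions could become positive finetuning
    examples, dismissed ones negatives."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

log_event(SuggestionEvent(
    prompt_context="def intersect(p1, p2, p3, p4):",
    suggestion="    # ...determinant formula...",
    accepted=True,
    timestamp=time.time(),
))
```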
How much of this is ‘quality of code’ vs. ‘quality of data’?
I’m pointing at overall gains in developer productivity. This could be used for collecting more data, which, AFAIK, happens by automatically scraping data from the internet using code (although the business collaboration between OpenAI and GitHub possibly helped too). Most of the dev work would then be iteratively cleaning that data, running trainings, changing the architecture, etc., before getting to the performance they’d want, and those cycles would be a tiny bit faster using such tools.
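As a toy illustration of what “iteratively cleaning that data” might look like in practice (the heuristics here are mine, not OpenAI's actual pipeline):

```python
import hashlib
from typing import Iterable, Iterator, Tuple

def clean_corpus(files: Iterable[Tuple[str, str]]) -> Iterator[Tuple[str, str]]:
    """One pass of iterative cleaning: drop exact duplicates and files that
    look auto-generated. Heuristics are illustrative only."""
    seen = set()
    for path, text in files:
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest in seen:
            continue  # exact duplicate; scraped code is full of these
        seen.add(digest)
        head = text[:200].lower()
        if "auto-generated" in head or "do not edit" in head:
            continue  # machine-generated file, adds noise to training
        yield path, text

# The loop is: run this over the scrape, inspect what got dropped, adjust the
# heuristics, retrain, repeat -- that iteration is where the dev time goes.
```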
To be clear, I’m not saying that talented engineers are coding much faster today. They’re probably doing creative work at the edge of what Codex has seen. However, we’re using the first version of something that, down the line, might end up giving us decent speed increases (I’ve been increasingly productive the more I’ve learned how to use it). A company owning such a model would certainly have private access to better versions to use internally, and there are strategic reasons for it not to share the next version of its code-generating model, so it can win a race while collecting feedback from millions of developers.