The problem with arguing against that claim is that nobody knows whether transformers / scaling language models are sufficient for full code automation. To take your nootropics example, a closer analogy would be this: nootropics are legal and have no negative side effects; a single company gives “beta access” (for now) to a new nootropic, in unlimited amounts and at no cost, to a market of tens of millions of users; the data from using this nootropic is collected by the company to improve the product; there really are 100k peer-reviewed publications per year in the field of nootropics; and most of the innovation behind the tech comes from a >100B-parameter model trained on open-source nootropic chemistry instructions. Would such advancements be evidence for something major we’re not certain about (e.g. a high-bandwidth brain-computer interface), or just evidence of increased productivity that would be reinvested into further nootropic development?
I think those advancements could be evidence for both, depending on the details of how the nootropics work, etc. But it still seems worth distinguishing the two things conceptually. My objection in both cases is that only a small part of the evidence for the first comes from the causal impact of the second: i.e. if Codex gave crazy huge productivity improvements, I would consider that evidence for full code automation (FCA) coming soon, but that’s mostly because it suggests that Codex can likely be improved to the point of FCA, not because it will make OpenAI’s programmers more productive.