I feel like this piece makes claims that are considerably broader than the cited references can support.
I don’t think the small, specific trial in [3] supports the general claim that “Current LLMs reduce the human labor and cognitive costs of programming by about 2x.”
I don’t think [10] says anything substantive about the claim “Fine tuning pushes LLMs to superhuman expertise in well-defined fields that use machine readable data sets.”
I don’t think [11] strongly supports a general claim that (today’s) LLMs can “Recognize complex patterns”, and [12] feels like very weak evidence for general claims that today’s LLMs can “Recursive troubleshoot to solve problems”.
The above are the result of spot-checking and are not meant to be exhaustive.