The later post still reiterates the main claims from this post, though.
This post: “Few-shot learning results are philosophically confusing and numerically unimpressive; the GPT-3 paper was largely a collection of few-shot learning results, therefore the paper was disappointing”
The later post: “Few-shot learning results are philosophically confusing and numerically unimpressive; therefore we don’t understand GPT-3’s capabilities well and should use more ‘ecological’ methods instead”
Many commenters on this post disagreed with the part that both posts share (“Few-shot learning results are philosophically confusing and numerically unimpressive”).