Also, in the Q&A session of the lecture, people discuss some difficult analogical reasoning tasks that most people solve "symbolically" and iteratively, for example by trying different candidate patterns and mentally checking each one for logical contradictions. GPT-3 somehow manages to solve these too, in a single auto-regressive rollout. This reminds me of "GPT can write Quines now (GPT-4)": both capabilities seem to point to a powerful reasoning ability that Transformers have but people don't.