I think you don’t mean this literally, as the linked paper does not argue for that position. Can you clarify exactly what you mean?
No, I mean that quite literally. From Appendix B of ‘Climbing Towards NLU’:
‘Tasks like DROP (Dua et al., 2019) require interpretation of language into an external world; in the case of DROP, the world of arithmetic. To get a sense of how existing LMs might do at such a task, we let GPT-2 complete the simple arithmetic problem Three plus five equals. The five responses below, created in the same way as above, show that this problem is beyond the current capability of GPT-2, and, we would argue, any pure LM.’
I found it because I went looking for falsifiable claims in ‘On the Dangers of Stochastic Parrots’ and couldn’t really find any, so I went back further to ‘Climbing Towards NLU’ on Ryan Greenblatt and Gwern’s recommendation.
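If anyone wants to try replicating the probe, here’s a minimal sketch using the HuggingFace transformers library. The paper doesn’t report its sampling settings, so the seed, completion length, and decoding parameters below are my own assumptions:

```python
# Rough sketch of the Appendix B probe: sample five GPT-2 completions
# of the prompt 'Three plus five equals'. Sampling settings are guesses;
# the paper only says five responses were generated.
from transformers import pipeline, set_seed

set_seed(42)  # arbitrary seed so the five samples are reproducible
generator = pipeline("text-generation", model="gpt2")

outputs = generator(
    "Three plus five equals",
    max_new_tokens=20,       # assumed completion length
    num_return_sequences=5,  # the paper's "five responses"
    do_sample=True,          # sampling, not greedy decoding
)
for out in outputs:
    print(out["generated_text"])
```

Note that decoding choices (greedy vs. sampling, temperature, top-k) can change how often the completion comes out wrong, so results won’t match the paper’s five responses exactly.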
Oh wow—missed that. Thanks!