I agree that humans would do poorly in the experiment you outline. I think this shows that, like the language model, humans-with-one-second do not “understand” the code.
Haha, good point—yes. I guess what I should say is: since humans would have performed just as poorly on this experiment, it doesn’t count as evidence for claims like “current methods are fundamentally limited,” “artificial neural nets can’t truly understand concepts in the ways humans can,” or “what goes on inside ANNs is fundamentally a different kind of cognition from what goes on inside biological neural nets,” or whatnot.
Oh yeah, I definitely agree that this is not strong evidence for typical skeptic positions (and I’d guess the authors would agree).