Your comparison does a disservice to the human’s sample efficiency in two ways:
1. You’re counting diverse data in the human’s environment, but you’re not comparing their performance on diverse tasks. Humans are obviously better than GPT-3 at interactive tasks, walking around, etc. For either kind of fair comparison (text data & text tasks, or diverse data & diverse tasks), the human has far superior sample efficiency.
2. “Fancy learning techniques” don’t count as data. If the human can get mileage out of them, all the better for the human’s sample efficiency.
So you seem to have it backwards when you say that the comparison that everyone is making is the “bad” one.
Thanks. Hmmm. I agree with #2, and should edit to clarify. I meant “fancy learning techniques that we could also do with our AIs if we wanted,” but maybe I’ll just avoid that can of worms for now.
For #1: We don’t know how well a human-sized artificial neural net would perform if it were trained on the quantity and variety of data that humans get. We haven’t done the experiment yet. However, my point is that for all we know it’s entirely possible that such a neural net would perform at about human level on all the tasks humans do. The people who say that modern neural nets are significantly less sample-efficient than humans are committed to denying this. (Or if they aren’t, then I don’t know what we are arguing about anymore?) They are committed to saying that we can extrapolate from e.g. GPT-3’s performance vs. training data to conclude that we’d need something trained a lot longer than a human (on similar-to-human-lifetime data) to reach human performance.

One way they might run this argument is to point out that GPT-3 has already seen more text than any human ever. My reply is that if a human had seen as much text as GPT-3, and only text, nothing else, they would probably have poor performance as well, certainly on every task that wasn’t a text-based one! Sorry for this oblique response to your point; if it is insufficient I can make a more direct one.
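To make the “more text than any human” point concrete, here is a rough back-of-envelope sketch in Python. The figures are assumptions on my part (the ~300 billion training tokens is the number reported in the GPT-3 paper; the human word-exposure rate is a commonly cited ballpark), so treat the output as an order-of-magnitude illustration, not a precise claim.

```python
# Back-of-envelope comparison of text exposure: GPT-3 vs. a human lifetime.
# All figures are rough assumptions for illustration only.

GPT3_TRAINING_TOKENS = 300e9   # ~300B tokens, as reported in the GPT-3 paper

WORDS_PER_DAY = 10_000         # assumed ballpark for words a person hears/reads daily
YEARS = 30                     # roughly an adult human's worth of exposure
human_lifetime_words = WORDS_PER_DAY * 365 * YEARS

ratio = GPT3_TRAINING_TOKENS / human_lifetime_words
print(f"Human exposure by age {YEARS}: ~{human_lifetime_words:.2e} words")
print(f"GPT-3 saw roughly {ratio:.0f}x more text than that")
```

On these assumptions GPT-3 has seen on the order of a thousand times more text than a person ever does, which is why a fair text-only comparison has to imagine a human restricted to a GPT-3-sized diet of text and nothing else.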