I often see AI skeptics ask GPT-3 if a mouse is bigger than an elephant; it says yes, so obviously it’s stupid. This is like judging a fish by its ability to climb a tree.
The only thing GPT-3 could learn, the only thing it had access to, was a universe of text: a textual universe created by humans who do have access to the real world. This textual universe correlates with the real world, but it is not the same as the real world.
Humans generate the training data and GPT-3 learns from it. So in a sense GPT-3 is less intelligent, because the only version of the universe it can access is already a messy approximation, made by humans who are themselves making messy approximations.
In GPT-3’s universe there is no space, no movement, no inertia, no gravity. So it seems fundamentally flawed to me to then say, “We trained it on X and it didn’t learn Y.”
All GPT-3 is doing is predicting the next word of text. Frankly, it’s incredible that it does as well as it does at all these other things.
Machine Learning Researchers
Paraphrased from this podcast: https://youtu.be/HrV19SjKUss?t=5870
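To make “predicting the next word” concrete, here is a minimal sketch using the Hugging Face transformers library. GPT-3’s weights are not publicly available, so GPT-2 stands in; the model name, the prompt, and the choice to print the top five candidates are illustrative assumptions, not anything from the podcast.

```python
# A minimal sketch of "just predicting the next word." GPT-2 stands in
# for GPT-3, whose weights are not public; the prompt and top-5 display
# are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Q: Is a mouse bigger than an elephant? A:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model's entire output is a score for every vocabulary item at
    # every position; we only look at the final position.
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

next_token_logits = logits[0, -1]
top5 = torch.topk(next_token_logits, k=5)
for token_id, score in zip(top5.indices.tolist(), top5.values.tolist()):
    print(f"{tokenizer.decode([token_id])!r}  (logit {score:.2f})")
```

Everything such a model does, answering questions included, is produced by sampling from this next-token distribution one token at a time.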