I think this is very possibly true. One reason to think children have much higher sample efficiency than ML systems (that is, they can learn from far fewer examples) is that they can experiment with the world, which lets them build a detailed and accurate model of basic physics and causality, and that model is a great base to learn from.
But of course language models have progressed much further and faster than embodiment proponents expected, so the jury is very much still out.
I think one productive question is: what is the cheapest experiment we could do that would convincingly demonstrate which side of this argument is right? Is there something easier than, e.g., Waymo, which is the largest-scale robotics experiment ever put together? And note that Waymo hasn't succeeded yet either, and is arguably making slower progress than LLMs.