You might say that it has an internal program that allows it to be faster and more accurate than a child.
My point is that children can solve ARC not because they have some amazing abstract, spherical-in-a-vacuum reasoning ability that LLMs lack, but because they have human-specific pattern-recognition abilities (for geometric shapes, number sequences, music, etc.). Brains have strong inductive biases, after all. If you trained a model purely on predicting a non-anthropogenic physical environment, I think it would struggle with ARC even if it had a sophisticated multi-level physical model of reality, because regular, ARC-style repeating shapes are not very probable under its priors.
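To make "regular, ARC-style repeating shapes" concrete, here is a minimal sketch of an invented ARC-like task (the grids and the rule are made up for illustration, not taken from the official dataset): the hidden transformation is a horizontal mirroring, exactly the kind of discrete symmetry that human perceptual priors make trivially easy.

```python
# Invented ARC-like task (illustrative only, not an official ARC puzzle).
# ARC grids are small matrices of integers 0-9; the solver must infer
# the hidden transformation from a few input/output pairs.

def hidden_rule(grid):
    # Mirror each row and append the reflection: a horizontal symmetry
    # that human visual priors pick up almost instantly.
    return [row + row[::-1] for row in grid]

train_pair = {
    "input":  [[1, 0],
               [0, 2]],
    "output": hidden_rule([[1, 0],
                           [0, 2]]),   # [[1, 0, 0, 1], [0, 2, 2, 0]]
}

test_input = [[3, 3],
              [0, 3]]
print(hidden_rule(test_input))  # [[3, 3, 3, 3], [0, 3, 3, 0]]
```

A model trained only on raw physical dynamics has no particular reason to assign high prior probability to this kind of discrete, grid-aligned symmetry; a child steeped in geometric shapes and tiled patterns does.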
My impression is that debates about ARC don't bring out a very high level of deliberation among AI people. Chollet and those who agree with him are like "nah, LLMs are nothing impressive, just interpolation databases!" and LLM enthusiasts are like "scaling will solve everything!!!!111!" Not many people seem to consider the possibility that something interesting is going on here: maybe we can learn something important about how humans and LLMs work that doesn't fit into simple explanation templates.
Let’s start with the end:
Why do you think that they don’t already do that?