“memorization and pattern matching rather than reasoning and problem-solving abilities”
In my opinion, this does not correspond to a principled distinction at the level of computation.
For intelligences that employ consciousness to do some of these things, there may be a difference in mechanism: reasoning and pattern matching do sound like different kinds of conscious activity.
But if we’re just talking about computation… a syllogism can be implemented via pattern matching, and a pattern can be completed by a logical (possibly probabilistic) process.
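To make that concrete, here is a toy sketch (my own illustration, not anything from the thread): the first half carries out the classic Barbara syllogism purely by matching token templates, with no logical semantics anywhere, and the second half completes a sequence pattern via an explicit probabilistic inference step (a maximum-likelihood Markov chain).

```python
from collections import Counter, defaultdict

# --- A syllogism implemented via pattern matching --------------------------
# Facts are flat tuples of tokens; the "inference rule" is just a template
# over surface shape: ("all", X, "are", Y) + ("all", Y, "are", Z)
#   -> ("all", X, "are", Z).
facts = {
    ("all", "men", "are", "mortal"),
    ("all", "greeks", "are", "men"),
}

def barbara(facts):
    """Derive Barbara-syllogism conclusions by matching token templates."""
    derived = set()
    for f1 in facts:
        for f2 in facts:
            if f1[0] == f2[0] == "all" and f1[2] == f2[2] == "are" and f1[3] == f2[1]:
                derived.add(("all", f1[1], "are", f2[3]))
    return derived

print(barbara(facts))  # {('all', 'greeks', 'are', 'mortal')}

# --- A pattern completed by a probabilistic process -------------------------
def complete(sequence):
    """Predict the next token via maximum-likelihood Markov-chain inference.
    (Assumes the last token has been seen as a transition source before.)"""
    transitions = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        transitions[a][b] += 1
    # Inference step: argmax over the estimated conditional distribution.
    return transitions[sequence[-1]].most_common(1)[0][0]

print(complete(["a", "b", "a", "b", "a"]))  # 'b'
```

Neither program knows which side of the reasoning/pattern-matching line it is supposed to fall on, which is the point.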
Perhaps, but deep learning models are still failing at ARC (Chollet’s Abstraction and Reasoning Corpus). My guess (and Chollet’s) is that they will continue to fail at it unless they are trained on that kind of data (which defeats the point of the benchmark) or something else is added that actually fixes this weakness of deep learning models. A deep learning model may be able to pattern-match its way to reasoning-like behaviour, but only if it is specifically trained on that kind of data; no matter how much you scale it up, it will still fail to generalize to anything not local to its training distribution.
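To illustrate what “not local to its training distribution” means here, a deliberately simple toy model (my own example, not an ARC task): a k-nearest-neighbour regressor, i.e. pure local pattern matching, which is near-perfect inside its training range and hopeless outside it.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=200)  # training distribution: x in [0, 1]
y_train = x_train ** 2                     # true rule: y = x^2

def knn_predict(x, k=5):
    """1-D k-nearest-neighbour regression: purely local pattern matching."""
    idx = np.argsort(np.abs(x_train - x))[:k]
    return y_train[idx].mean()

print(knn_predict(0.5))   # ~0.25 (in-distribution: nearly exact)
print(knn_predict(10.0))  # ~1.0  (out-of-distribution: the true answer is 100)
```

No amount of extra data drawn from [0, 1] fixes the prediction at x = 10; that is the scaling-won’t-help intuition in miniature.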