I don’t think I disagree with you on anything; my point is more “what does creating new knowledge mean?”. For example, the difference between interpolation and extrapolation might be a rigorous way of framing it. Someone posted a LeCun paper on that here; it found that extrapolation is the regime in which most ML systems actually operate, and assumed the same must be true of deep learning ones. But if, for example, there were some kind of phase transition in the learning process that moved certain systems into an interpolation regime, that could explain things.

Overall, I agree that none of this should be a fundamental difference from human cognition. It could be a current one, but one that would at least be possible to overcome in principle. Or LLMs could already be in this new regime; after all, it’s not like anyone has checked yet (honestly though, it might not be too hard to check, and we should probably try — a rough sketch of what the check looks like is below).
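For concreteness, here is a minimal sketch of that check, assuming the paper in question is “Learning in High Dimension Always Amounts to Extrapolation” (Balestriero, Pesenti, LeCun 2021), which defines a sample as interpolation iff it lies inside the convex hull of the training set. Hull membership reduces to a linear-program feasibility problem, so scipy can test it directly; the function name and toy data below are purely illustrative:

```python
# Convex-hull membership test, the geometric definition of "interpolation"
# in the Balestriero/Pesenti/LeCun paper (assumption: that is the paper
# referenced above). A query point interpolates iff there exist weights
# lambda >= 0 with sum(lambda) = 1 and train.T @ lambda = query.
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(query: np.ndarray, train: np.ndarray) -> bool:
    """True if `query` (shape (d,)) lies in the convex hull of
    `train` (shape (n, d)), via an LP feasibility check."""
    n, d = train.shape
    c = np.zeros(n)  # zero objective: we only care about feasibility
    # Equality constraints: the convex combination reproduces the query
    # point, and the weights sum to one.
    A_eq = np.vstack([train.T, np.ones((1, n))])   # shape (d + 1, n)
    b_eq = np.concatenate([query, [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success  # feasible LP <=> query is inside the hull

# Toy illustration in 2-D: a central point interpolates, a far-out one
# extrapolates. The paper's claim is that as dimension grows, essentially
# every new sample falls outside the hull.
rng = np.random.default_rng(0)
train = rng.standard_normal((200, 2))
print(in_convex_hull(np.zeros(2), train))        # almost surely True
print(in_convex_hull(np.full(2, 10.0), train))   # False
```

Run against an actual model you’d test the training embeddings rather than raw pixels or tokens, and the interesting question is exactly the one above: whether some training dynamics push held-out samples back inside the hull, or whether everything stays extrapolative as the paper argues.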