I agree. However, I doubt that the examples from argument 4 are in the training data; I think this is the strongest argument. The scenario was my own invention, and I didn't find any study or research on a similar topic with the same criteria as in the appendix (though I didn't search extensively).
I agree that, tautologically, there is some implicit model that enables the LLM to infer what will happen in the case of the ball. I also think there is a reasonably strong argument that whatever this model is, it in some way maps to "understanding of causes". But there is an argument the other way: any map between the implicit associations and reality may be so convoluted that almost all of the complexity is contained within our understanding of how language maps to the world. This is a direct analog of Aaronson's "Waterfall Argument". The issue is that there is certainly lots of complexity in the model, but we don't know how complex the map between the model and reality is. And because that map routes through human language, the stochastic parrot argument, as I understand it, is that the understanding is mostly contained in the way humans perceive language.