I think it’s important to distinguish between the following two claims:
1. If GPT-3 has a world model, that model is inaccurate.
2. GPT-3 has no world model.
Claim 1 is certainly true, if only because real-world agents, including humans, are fallible (and perfectly accurate world models are not essential for competent practice). There’s no reason to suppose that GPT-3 would be any different.
One might argue, I suppose, that safe application of GPT-3 requires it to have a world model that is at least as accurate, in every domain, as individual humans’ models of the world. Marcus seems to believe that this can’t be achieved by LLMs using statistical ML methods on feasibly available training sets. I don’t find his arguments persuasive, although his conclusion might nonetheless turn out to be correct, since the success criterion in question is very stringent.
Claim 2 is the conceptually problematic one, for all the reasons you describe.
For what it’s worth, the following post summarises an experimental study in which the authors argue that an LLM demonstrably develops a model of a toy world (the board for the game Othello) when trained on synthetic data from that toy world (valid move sequences).
https://thegradient.pub/othello/
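To make that claim a little more concrete, here is a minimal sketch of the probing methodology the post describes, as I understand it: fit a simple classifier from the model’s hidden activations to the true state of each board square, and check whether it beats chance on held-out positions. This is not the authors’ code; the data shapes, the use of scikit-learn’s LogisticRegression, and the random placeholder arrays are all my own illustrative assumptions.

```python
# Sketch of the probing idea from the linked Othello post: train a simple
# classifier to read the state of each board square off a model's hidden
# activations. Everything below uses random placeholder data; in the actual
# study the activations come from a transformer trained on valid Othello
# move sequences, and the labels are the true board states.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_POSITIONS = 2000  # (board state, activation) pairs -- placeholder size
HIDDEN_DIM = 128    # hidden size of the sequence model -- placeholder size
N_SQUARES = 64      # an Othello board has 8 x 8 = 64 squares

# Placeholder stand-ins: random "activations" and random square labels
# (0 = empty, 1 = black, 2 = white). Swap in real activations and labels.
activations = rng.normal(size=(N_POSITIONS, HIDDEN_DIM))
board_labels = rng.integers(0, 3, size=(N_POSITIONS, N_SQUARES))

X_train, X_test, y_train, y_test = train_test_split(
    activations, board_labels, test_size=0.2, random_state=0
)

# One probe per square. If the probes recover the board state well above
# chance on held-out positions, that is evidence the activations encode a
# model of the board; with the random data above, accuracy should sit
# near the 1/3 chance level.
accuracies = []
for square in range(N_SQUARES):
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X_train, y_train[:, square])
    accuracies.append(probe.score(X_test, y_test[:, square]))

print(f"Mean held-out probe accuracy: {np.mean(accuracies):.3f}")
```

The point of the probe is epistemic rather than practical: it is a way of asking whether information about the toy world is linearly (or simply) recoverable from the network’s internal state, which is the operational sense in which the study claims the model "has" a world model.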