This argument proves too much: its generalization is simply the standard argument for why exact prediction of long future sequences is hard (errors compound and trajectories diverge exponentially).
The solutions humans use are fairly straightforward to apply to LLMs: 1.) we don’t condition only on our own predictions; we update on observations. (For LLMs this amounts to ReAct-style prompting, where the LLM’s outputs are always interspersed with observations from the world and/or inputs from humans; see the sketch below.) 2.) We plan using approximate, abstract predictions of the future rather than exact ones, which LLMs are also amenable to.
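To make point 1 concrete, here is a minimal sketch of such a loop in Python. The `call_llm` and `run_tool` functions are hypothetical stand-ins for a model API and an environment, not any specific library; the only point is that each model output is followed by a fresh observation from the world before the next prediction, so errors don’t compound unchecked.

```python
# Minimal ReAct-style loop (illustrative sketch; `call_llm` and `run_tool`
# are hypothetical stand-ins, not a specific library's API).

def call_llm(transcript: str) -> str:
    """Hypothetical model call: returns the next 'Action' or 'Final Answer' step."""
    return "Action: search('weather in Paris')"  # stub for illustration

def run_tool(action: str) -> str:
    """Hypothetical tool/environment call: returns a real-world observation."""
    return "Observation: 14°C, light rain"  # stub for illustration

def react_loop(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        step = call_llm(transcript)          # model predicts its next step
        transcript += step + "\n"
        if step.startswith("Final Answer"):  # stop once the model commits
            break
        observation = run_tool(step)         # ground the transcript in the world
        transcript += observation + "\n"     # condition on the observation, not
                                             # only on the model's own prior output
    return transcript

if __name__ == "__main__":
    print(react_loop("What should I wear in Paris today?"))
```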