I think one solution is to use an LLM to generate only short answers, where the probability of error is small, and then use those answers as prompts to generate further short answers. This is how the various AutoGPT-style systems work: each short answer is a step in a plan for solving the task.
We can also use the LLM itself to check previous short answers for correctness, as sketched below.
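A minimal sketch of this generate-then-verify loop, assuming a hypothetical `llm(prompt)` call that stands in for whatever model API you use (it is not a real library function):

```python
def llm(prompt: str) -> str:
    """Hypothetical LLM call: returns a short completion for `prompt`."""
    raise NotImplementedError("plug in your model/API client here")

def solve(task: str, max_steps: int = 10) -> list[str]:
    """Solve `task` by chaining short, low-error-probability answers.

    Each iteration asks the model for one short next step, then asks
    the model again to verify that step before appending it.
    """
    steps: list[str] = []
    for _ in range(max_steps):
        context = f"Task: {task}\nSteps so far: {steps}\n"
        step = llm(context + "Give the single short next step, or DONE.")
        if step.strip() == "DONE":
            break
        # Error correction: use the LLM itself as a checker.
        verdict = llm(context + f"Proposed step: {step}\nIs it correct? yes/no")
        if verdict.strip().lower().startswith("yes"):
            steps.append(step)
        # Otherwise discard the step and retry from the same context.
    return steps
```

The point of the verification pass is that errors are caught after each short step rather than compounding across one long autoregressive generation.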
More generally, LeCun’s argument can be applied to other generative processes like evolution or science, but we know those processes contain error-correcting mechanisms, such as natural selection and experiment.