I think this seems to be a very accurate abstraction of what is happening. During sleep, the brain consolidates (compresses and throws away) information. This would be equivalent to summarising the context window + discussion so far, and adding it to a running 'knowledge graph'. I would be surprised if someone somewhere has not tried this already on LLMs: summarising the existing context + discussion, formalising it in an external knowledge graph, and allowing the LLM to do RAG over this during inference in future.
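A minimal sketch of that consolidation loop, assuming a hypothetical `summarise_to_triples` step (a stub here; in practice an LLM would extract the facts) and plain keyword matching standing in for a real RAG retriever:

```python
class KnowledgeGraph:
    """Running store of (subject, relation, object) facts consolidated
    from past conversations."""
    def __init__(self):
        self.triples = []

    def add(self, subject, relation, obj):
        triple = (subject, relation, obj)
        if triple not in self.triples:  # consolidation: drop duplicates
            self.triples.append(triple)

    def retrieve(self, query):
        """Return triples whose subject or object appears in the query."""
        words = set(query.lower().split())
        return [t for t in self.triples
                if t[0].lower() in words or t[2].lower() in words]


def summarise_to_triples(context):
    """Stub for the 'sleep' phase: an LLM would compress the context
    window into structured facts. Here we just parse 'A relation B' lines."""
    triples = []
    for line in context.splitlines():
        parts = line.split()
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples


kg = KnowledgeGraph()
# End of a session: consolidate the discussion into the graph.
for s, r, o in summarise_to_triples("alice likes chess\nbob plays go"):
    kg.add(s, r, o)

# A later session: retrieve consolidated facts to prepend to the prompt.
facts = kg.retrieve("what does alice enjoy")
print(facts)  # → [('alice', 'likes', 'chess')]
```

The retrieved triples would be injected into the prompt of the next session, so the model "remembers" compressed facts without carrying the full transcript forward.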
Although, I do think LLM hallucinations and brain hallucinations arise via separate mechanisms. In particular, there is evidence that human hallucinations (sensory processing errors) occur when the brain's top-down inference (the Bayesian 'what I expect to see based on priors') fails to happen correctly, with an increased reliance on bottom-up processing instead (https://www.neuwritewest.org/blog/why-do-humans-hallucinate-on-little-sleep).
Thanks for your comment! On further reflection I think you're right about the difference between LLM hallucinations and what's commonly meant when humans refer to "hallucination." A better comparison may be between LLMs and human confabulation, as seen in something like Korsakoff syndrome, where anterograde and retrograde amnesia result in a tendency to invent memories with no basis in reality to fill the gap.
I guess to progress from here I’ll need to take a dive into neural entropy.