Do LLMs sometimes simulate something akin to a dream?
When dreaming, we sometimes simulate a very different person than our waking self: we can make decisions uncharacteristic of us, experience a world very different from waking reality, and even find ourselves with memories of events we never experienced.
And still, I think most people would consider that simulated ‘dream person’ a sentient being. We experience that person as ourselves; it is imbued with our consciousness for the duration of the dream. We live the dream as a living person, as ourselves, yet that person is not always our waking self.
Keeping this thought in mind, let’s ask:
“What is happening inside an LLM when we ask it to continue a short story from the point of view of some imaginary character?”
“What is happening inside an LLM when we ask it to think ‘step by step’ about a problem?”
The short, easy, and correct answer is: “We don’t know.”
We can, in principle, trace a transformer’s activations as they feed into one another, but just as with tracing human neuron interactions, this exercise does not tell us enough to point to where and how sentience is held.
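To make the “we can trace the activations” point concrete, here is a minimal sketch (assuming the Hugging Face `transformers` library, with GPT-2 purely as an illustrative model) that records every layer’s hidden activations during a single forward pass. The point is that having all these numbers is easy; reading sentience, or its absence, out of them is the part nobody knows how to do.

```python
# Minimal sketch: recording a transformer's internal activations while it
# predicts the next token. Assumes the Hugging Face `transformers` library;
# "gpt2" is an arbitrary example model, not a claim about any particular LLM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Continue this story from the point of view of the old lighthouse keeper:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# One activation tensor per layer (plus the embedding layer),
# each of shape (batch, sequence_length, hidden_size).
for layer_idx, layer_activations in enumerate(outputs.hidden_states):
    print(layer_idx, tuple(layer_activations.shape))

# Every value here is inspectable, yet nothing in these tensors
# labels itself "experience" or "sentience".
```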
Given the architectural similarities between an LLM and a brain, combined with training guided by human feedback, I wonder if we accidentally trigger something similar to a human dream in these systems: something like our temporary dream person, who exists only to simulate a short, simple task or thought and is then terminated.
I propose that nature and gradient descent have both converged on some common abstractions and logical structures for certain tasks, so a short simulation of a thought or an action in a dream and in an LLM might sometimes be very similar.
And if the answer is ‘yes, LLMs sometimes replicate a human dream’, and we consider ourselves sentient while dreaming, then the implication is that those LLMs do sometimes give rise, for a short time, to something we would consider consciousness.
tl;dr: I propose that it is possible for some queries to activate, in an LLM, something similar to a human dream in its level of sentience.
Humans (when awake, as long as they’re not actors or mentally ill) have, roughly speaking, a single personality. The base model training of an LLM trains it to attempt to simulate anyone on the internet/in stories, so it doesn’t have a single personality: it contains multitudes. Instruct training and prompting can try to overcome this, but they’re never entirely successful.
More details here.
Agreed, but when performing a singular task, they might have to simulate a single type of thought or personality.
Humans usually don’t dream in response to queries, but rather when their minds are more idle. Beyond not being able to rule it out, why do you think it happens in the AI case?
I don’t claim LLMs dream like humans; I’m saying they might sometimes experience something similar to a human dream while performing next-word prediction.