Boltzmann brains are random, and are exponentially unlikely to correlate with anything in their environment. Language model forward passes, by contrast, are given information that has some meaningful connection to reality; if nothing else, the human interacting with the language model reveals what they are thinking about. This is accurate information about reality, and it’s persistent between evaluations: on successive evaluations in the same conversation (say, from one word to the next, or one message to the next), the available information is highly correlated, and the activations from all previous words are available. So while I agree that their sense of time is spiky and non-smooth, I don’t think it’s accurate to compare them to random fluctuation brains.
I think of the classic Boltzmann brain thought experiment as a brain that thinks it’s human, and has a brain state that includes a coherent history of human experience.
This is actually interestingly parallel to an LLM forward pass, where the LLM has a context that appears to be a past, but may or may not be one (e.g., apparent past statements by the LLM may have been inserted by the experimenter and not reflect an actual dialogue history). So although past context is often persistent between evaluations, that’s not a necessary feature at all.
I guess I don’t think ongoing correlation is very relevant for a Boltzmann brain, since (IIRC) the typical Boltzmann brain exists only for a moment (and of those that last longer, I expect their typical experience is of their brief moment of coherence dissolving rapidly).
That said, I agree that if you instead consider the (vastly larger) set of spontaneously appearing cognitive processes, most of them won’t have anything like a memory of a coherent existence.