The story so far:
The thermodynamic arrow of time says that we tend to end up in macrostates (states of knowledge) that contain many microstates, which is completely compatible with time-symmetric evolution of microstates. Basically physics is like a random walk, which is time-symmetric, but you still tend to end up in bigger countries. (Bigger countries correspond to macrostates near equilibrium, because there are more ways to arrange two molecules with velocity 10 than one with velocity 0 and another with velocity 20. The difference is exponential in the number of molecules, so the second law of thermodynamics is an iron law indeed.)
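To make the exponential counting concrete, here's a minimal sketch (the toy model and all the numbers are my own illustration): spread energy quanta over toy molecules and count the microstates of each macrostate by stars and bars.

```python
from math import comb, log

# Toy model: q indistinguishable energy quanta spread over n "molecules".
# The number of microstates is the stars-and-bars coefficient C(q+n-1, n-1).
def microstates(q: int, n: int) -> int:
    return comb(q + n - 1, n - 1)

N, Q = 100, 200  # 100 molecules per half of a box, 200 quanta total

# Macrostate A: energy shared evenly between the two halves.
balanced = microstates(Q // 2, N) * microstates(Q // 2, N)
# Macrostate B: one half has all the energy, the other has none.
lopsided = microstates(0, N) * microstates(Q, N)

# The balanced macrostate wins by a factor that's exponential in N.
print(f"balanced : lopsided = exp({log(balanced / lopsided):.0f}) : 1")
```

With these numbers the balanced macrostate contains about e^83 times more microstates, and the exponent grows roughly linearly with the number of molecules.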
The usual problem with that story is Loschmidt’s paradox: if we have a glass of hot water with some ice cubes floating in it, the most probable future of that system is a glass of uniformly warm water, but then so is its most probable past, according to the exact same Bayesian reasoning. Taking that to the extreme, you should conclude that every person you see was a decomposing (recomposing?) corpse a minute ago. That seems weird!
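You can watch the paradox happen in a sandbox. Here's a toy model I cooked up for this post (everything in it is a stand-in, nothing rigorous): a handful of particles doing a time-symmetric random walk on a ring. Find the rare moments where the coarse-grained entropy dips low, then look a few steps before and after each dip.

```python
import random
from collections import Counter
from math import log

# Coarse-grained (macrostate) entropy of the occupancy histogram.
def entropy(positions):
    counts, n = Counter(positions), len(positions)
    return -sum(c / n * log(c / n) for c in counts.values())

random.seed(0)
SITES, N, STEPS, LAG = 6, 6, 200_000, 5
pos = list(range(N))                   # start with one particle per site
history = []
for _ in range(STEPS):
    i = random.randrange(N)            # statistically time-symmetric move
    pos[i] = (pos[i] + random.choice((-1, 1))) % SITES
    history.append(entropy(pos))

# Rare low-entropy moments, away from the ends of the run.
low = [t for t in range(LAG, STEPS - LAG) if history[t] < 0.7]
dip    = sum(history[t] for t in low) / len(low)
before = sum(history[t - LAG] for t in low) / len(low)
after  = sum(history[t + LAG] for t in low) / len(low)
print(f"{len(low)} dips; mean entropy at dip {dip:.2f}, "
      f"{LAG} steps before {before:.2f}, {LAG} steps after {after:.2f}")
```

The before and after numbers come out essentially equal: conditioned only on the low-entropy present, the model retrodicts a higher-entropy past exactly as confidently as it predicts a higher-entropy future. That's the recomposing corpse.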
The usual resolution to that paradox is the Past Hypothesis: for predicting the most probable past of a system, we need to condition not just on the present, but also on a very low-entropy distant past. For example, a uniform distribution of matter in the early universe would do the job, because it would be very far from gravitational equilibrium. See this write-up by Huw Price for a simple explanation.
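In the same toy setting, imposing the Past Hypothesis by hand breaks the symmetry (again, just my sketch): condition on a very low-entropy state at t = 0, and entropy becomes monotone, so the inferred past of any intermediate state is now lower entropy rather than higher.

```python
import random
from collections import Counter
from math import log

def entropy(positions):
    counts, n = Counter(positions), len(positions)
    return -sum(c / n * log(c / n) for c in counts.values())

random.seed(1)
N = 10_000
pos = [0] * N                 # the Past Hypothesis: everyone starts at one site
for t in range(1, 101):
    pos = [p + random.choice((-1, 1)) for p in pos]
    if t in (10, 50, 100):
        print(f"t = {t:3d}: entropy = {entropy(pos):.2f} nats")
```

With the low-entropy boundary condition in place, "entropy was lower toward the past" stops being a statistical anomaly and becomes a property of the whole ensemble.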
The trouble is that the Past Hypothesis isn’t completely satisfying. Leaving aside the question of how we can infer the distant past except by looking at the present, in the overall soup of all past and future states it’s still much more likely that any particular low-entropy state (like ours) came from a higher-entropy one by pure dumb chance, if only because the future universe will sit in equilibrium for a very long time, long enough for many fluctuations to arise. So you must assume that you’re the smallest possible fluctuation compatible with your experience, which is known as a Boltzmann brain. Basically you should expect your whole vision to turn into TV static in the next second. That’s even worse than recomposing corpses!
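To see why the smallest fluctuation dominates: the standard scaling is that a fluctuation with entropy deficit ΔS has probability on the order of exp(−ΔS/k). The numbers below are loudly made-up placeholders, only the scaling matters:

```python
from math import log

# Placeholder entropy deficits in units of k -- illustrative magnitudes only,
# NOT physical estimates.
dS_brain    = 1e20   # hypothetical cost of fluctuating a bare brain into existence
dS_universe = 1e30   # hypothetical cost of fluctuating a whole fresh low-entropy universe

# P ~ exp(-dS), so the log10 of the odds ratio is (dS_universe - dS_brain) / ln 10.
log10_odds = (dS_universe - dS_brain) / log(10)
print(f"P(lone brain) / P(whole universe) ~ 10^{log10_odds:.2g}")
```

Whatever the real deficits are, the odds ratio has this shape, which is why the equilibrium soup is dominated by the stingiest fluctuations.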
So what do we make of this? I’ve toyed with the idea that Kolmogorov complexity (K-complexity) might determine which laws of physics we’re likely to see. If you have a bunch of bits describing a world that looks lawful like ours, without recomposing corpses or vision turning into static, then the most likely (K-simplest) future evolution of those bits will follow the same laws, whatever they are. That still leaves the question of figuring out what the laws are, but it at least gives a hint as to why we aren’t Boltzmann brains, and also why the early universe was simple. That sounds promising! On the other hand, K-complexity feels like a shiny new hammer that can lead to all sorts of paradoxes as well, so we should use it carefully.
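Here's one way to poke at that idea with a computable stand-in (my sketch; true Kolmogorov complexity is uncomputable, so zlib's compressed length serves as a crude upper bound). Generate a "lawful" history from a simple rule, then compare the description cost of a lawful future against a TV-static future:

```python
import random
import zlib

random.seed(0)

def cost(data: bytes) -> int:
    """Compressed length in bytes -- a rough, computable proxy for K-complexity."""
    return len(zlib.compress(data, 9))

def law(i: int) -> int:
    return (i * i) % 256               # an arbitrary simple "law of physics"

history       = bytes(law(i) for i in range(10_000))
lawful_future = bytes(law(i) for i in range(10_000, 20_000))
static_future = bytes(random.randrange(256) for _ in range(10_000))

base = cost(history)
print("extra bytes to describe the lawful future:", cost(history + lawful_future) - base)
print("extra bytes to describe TV static:        ", cost(history + static_future) - base)
```

The lawful continuation costs almost nothing extra, while the static one costs about as much as its own length, which is the intuition for why a simplicity prior favors futures that keep obeying the same laws.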
What do you think?