Space itself is symmetrical, so it is equally possible that the world is behaving in the way we think it is, with objects falling down, and that the world is behaving in a super wild and improbable way: the earth is above us and things actually fall up.
The situation is symmetrical. We define past and future in terms of entropy. Your weird and improbable universe is an upside-down map of the same territory.
Also, note that large fluctuations are much, much less likely than small ones. The “we are in a random fluctuation” theory predicts that we are almost certainly in the smallest fluctuation we could possibly fit in. (Think of a spontaneously assembled nanocomputer just big enough to run a single mind that counts as an observer in whatever anthropic theory we use.) Even conditioned on our insanely unlikely experience so far, this hypothesis stubbornly predicts that the world we see isn’t real. (I am not sure whether it predicts that our experiences were noise, or that we are on a slightly larger nanocomputer that is running an approximate physics simulation.)
Clearly some sort of anthropic magic or something happens here. We are not in a random fluctuation. Maybe the universe won’t last a hyperexponential amount of time. Maybe we are anthropically more likely to find ourselves in places with low Kolmogorov-complexity descriptions. (“All possible bitstrings, in order” is not a good law of physics just because it contains us somewhere.)
Another way of thinking about this, which amounts to the same thing: holding the laws of physics constant, the Solomonoff prior will assign much more probability to a universe that evolves from a minimal-entropy initial state than to one that starts off in thermal equilibrium. In other words:
Description 1: The laws of physics + The Big Bang
Description 2: The laws of physics + some arbitrary configuration of particles
Description 1 is much shorter than Description 2, because the Big Bang is much simpler to describe than some arbitrary configuration of particles. Even after the heat-death of the universe, it’s still simpler to describe it as “the Big Bang, 10^zillion years on” rather than by exhaustive enumeration of all the particles.
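As a rough illustration of the description-length gap, here is a sketch using zlib’s compressed size as a crude stand-in for Kolmogorov complexity (the byte strings and their sizes are arbitrary stand-ins for physical states, not real physics):

```python
import os
import zlib

# Crude proxy for description length: compressed size.
simple_state = bytes(1000)          # a highly regular, "Big Bang"-like state
arbitrary_state = os.urandom(1000)  # an arbitrary configuration of particles

print(len(zlib.compress(simple_state)))     # ~ a dozen bytes
print(len(zlib.compress(arbitrary_state)))  # ~ 1000 bytes or more
```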
This dispenses with the “paradox” of Boltzmann brains, and with Roger Penrose’s puzzle about why the Big Bang had such low entropy despite its overwhelming improbability.
The entropy of the brain is approximately constant. A bit higher when you have the flu, a bit lower in the morning (when your body temperature is lower). If we perceived past and future according to the direction in which the entropy of our brain increases, I would remember the next day when going to bed.
The human brain radiates waste heat. It is not a closed system.

Let’s think about a row of billiard balls on a large table. A larger ball is rolled over them, scattering several. Looking at where the billiard balls end up, we can deduce where the big ball went, but only if we assume the billiard balls started in a straight line.

Imagine a large grid of tiny switches that flip every time they are pressed. If the grid starts off all 0s, you can write on it. If it starts off random, you can’t.

In all these cases you have two systems, X and Y. Let’s say that X starts in a state of nonzero entropy, and Y starts at all 0s. X and Y can interact through a controlled-NOT, sending a copy of X into Y. (Other interactions can send just limited partial info from X to Y.) Then X can evolve somewhat, and maybe interact with some other systems, but a copy remains in Y: a recording of the past. You can’t make a recording of the past without some bits that start out all 0s. You can’t record the future without bits that end up all 0s. (Under naive forward causality.)
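A minimal sketch of that last setup, with the controlled-NOT modelled as a plain XOR on classical bits (the length N and the bit strings are arbitrary illustrative choices):

```python
import random

N = 16
x = [random.randint(0, 1) for _ in range(N)]   # system X: nonzero entropy

# Record X into a buffer that starts all 0s: a controlled-NOT is just XOR.
y = [0] * N
y = [yi ^ xi for yi, xi in zip(y, x)]
assert y == x          # Y now holds a faithful record of X

# If the buffer starts random instead, the same interaction scrambles:
y_random = [random.randint(0, 1) for _ in range(N)]
garbled = [yi ^ xi for yi, xi in zip(y_random, x)]
# garbled reveals nothing about x unless you also know y_random's start state.
```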
How do you make space for new memories? By forgetting old ones? Info can’t be destroyed. Your brain is taking in energy-dense, low-entropy food and air, and radiating out the old memory as a pattern of waste heat. Maybe you were born with a big load of empty space that you slowly fill up. A big load of low-entropy blank space can only be constructed from the negentropy source of food + air + cold.
The human brain radiates waste heat. It is not a closed system.
Sure, but are you saying that the human brain perceives the whole universe (including the heat it has dissipated into its surroundings) when deciding what to label as “past” or “future”? The entropy of the human body is approximately constant (as long as you are alive).
How do you make space for new memories? By forgetting old ones? Info can’t be destroyed. Your brain is taking in energy-dense, low-entropy food and air, and radiating out the old memory as a pattern of waste heat. Maybe you were born with a big load of empty space that you slowly fill up. A big load of low-entropy blank space can only be constructed from the negentropy source of food + air + cold.
OK, we should clarify which entropy you are talking about. Since this is a post about thermodynamics, I assumed that we are talking about dS = dQ/T, that is, the entropy for which the second law was introduced. In that case, when your brain’s temperature goes down its entropy also goes down, no use splitting hairs.
In the second paragraph, it seems instead that you are talking about an uncertainty measure, like the von Neumann entropy*. But the von Neumann entropy of what? The brain is not a string or a probability distribution, so the von Neumann entropy is ill-defined for a brain. But fine, let us suppose that we have agreed on an abstract model of the brain on which we can define a von Neumann entropy (maybe its quantum density matrix, if one can define such a thing for a brain). Then:
In a closed system, the fact that “info can’t be destroyed” means that the total “information” (in a sense, the total von Neumann entropy) is constant. It never increases nor decreases, so you cannot use it to make a time arrow (a quick numerical check of this constancy follows below).
In an open system (like the brain), it can increase or decrease. When you sleep, I would guess it decreases, because in some sense the brain gets “tidied up”: many memories of the day are deleted, etc. But, again, when you go to bed your consciousness does not perceive the following morning as “past”.
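A minimal numerical check of that constancy claim, under the usual assumptions (a density matrix evolving unitarily; the 4-level system and the random seed are arbitrary illustrative choices):

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S = -tr(rho log rho), computed via eigenvalues."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                     # discard numerical zeros
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)

# A random mixed state of a 4-level system (any normalized A @ A† works).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho)

# A random unitary (closed-system evolution), from a QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

print(vn_entropy(rho))                   # same number both times:
print(vn_entropy(Q @ rho @ Q.conj().T))  # unitaries preserve the spectrum
```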
*While the authenticity of the following quote is debated, it is worth stressing that thermodynamic entropy and Shannon/von Neumann entropy are indeed two entirely different things, though related in some specific contexts.
You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.
Imagine a water wheel. The direction the river flows in controls the direction that the wheel turns. The amount of water in the wheel doesn’t change.
Just because the total entropy doesn’t change over time doesn’t mean the system is time-symmetric. An electric circuit has a direction to it, even though the number of electrons at any position doesn’t change (except a bit in capacitors).
So the forward direction of time is the direction in which your brain is creating thermodynamic entropy. Run a brain forward, and it breaks up sugars to process information and expels waste heat. Run it backwards, and waste heat comes in and, through a huge fluke, jostles the atoms in just the right way to make sugars.
Entropic rules are more subtle than just tracking the total amount of entropy. You can track the total amount of entropy and get meaningful restrictions on what is allowed to happen. But you can also get meaningful restrictions on which bits are allowed to be in which places. Restrictions that can also be understood in terms of state spaces. Restrictions that stop you from finding out about quantum random events that will happen in the future.
The physical entropy and the von Neumann information-theoretic entropy are intricately interrelated.
[...] the total “information” (in a sense, the total von Neumann entropy) is constant. It never increases nor decreases, so you cannot use it to make a time arrow.
This is technically true of the universe as a whole. Suppose you take a quantum hard drive filled with 0s, and fill it with bits in an equal superposition of 0 and 1 by applying a Hadamard gate. You can take those bits and apply the gate again to get the 0s back. Entropy has not yet increased. Now print those bits. The universe branches into 2^(bit count) quantum branches. The entropy of the whole structure hasn’t increased, but the entropy of a typical individual branch is higher than that of the whole structure. In principle, all of these branches could be recombined; in practice, the printer has radiated waste heat that speeds away at light speed, and there probably aren’t aliens rushing in from all directions carrying every last photon back to us.
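A minimal state-vector sketch of that round trip (numpy only; representing one qubit as a 2-vector is an illustrative simplification):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

zero = np.array([1.0, 0.0])      # |0>
plus = H @ zero                  # equal superposition (|0> + |1>) / sqrt(2)
back = H @ plus                  # H is its own inverse
assert np.allclose(back, zero)   # the 0s are recovered; no entropy created

# "Printing" the bit instead measures it: two branches of weight |amplitude|^2.
branch_weights = np.abs(plus) ** 2    # [0.5, 0.5]
# With n such qubits, measurement yields 2^n equally weighted branches.
```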
The universe is low entropy (like under a kilobyte) in Kolmogorov-complexity-based entropy. Suppose you have a crystal made of two elements, where one of the elements appears only at prime-numbered coordinates in the atomic structure. Superadvanced nanomachines could exploit this pattern to separate the elements without creating waste heat. Current human tech would just treat them as randomly intermixed, and melt them all down or dissolve them: a technique that will work however the atoms were arranged, and that must produce some waste heat. Entropy is just anything you can’t practically uncompute. https://en.wikipedia.org/wiki/Uncomputation
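A toy version of the prime-coordinate crystal (the 1000-site lattice and element labels are made-up illustrative choices; the point is that the arrangement is generated, and hence separable, by a short program):

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# Hypothetical two-element crystal: element 'B' occupies exactly the
# prime-numbered lattice sites, element 'A' all the others.
crystal = ['B' if is_prime(i) else 'A' for i in range(1000)]

# A machine that knows the pattern can separate the elements by pure
# computation: the map "site index -> element" is deterministic, so
# nothing about the arrangement needs to be measured, recorded, or
# erased, and the whole separation can in principle be uncomputed.
b_sites = [i for i in range(1000) if is_prime(i)]
assert all(crystal[i] == 'B' for i in b_sites)

# A pattern-blind process must treat the same crystal as a random
# mixture, and pays the usual thermodynamic cost of sorting it.
```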
Remembering something into an empty memory buffer is an allowed operation of a reversible computer: just apply a controlled-NOT.
When the brain forgets something, the thing it’s forgetting isn’t there any more, so it must radiate that bit away as waste heat.
The universe as a whole behaves kind of like a reversible circuit. Sure, it’s actually continuous and quantum, but that doesn’t make much difference. The universe has an operation of “scramble”: something that is in principle reversible, but in practice will never be reversed.
If you look at a fixed reversible circuit, it is a bijection from inputs to outputs. (X, X) and (X, 0) are both states that have half the maximum entropy, and one can easily be turned into the other. The universe doesn’t provide any instances of (X, X) to start off with (for non-zero X), but it provides lots of 0s. When you come across a scrambled X, it’s easy to CNOT it with a nearby 0. So the number of copies of (X, X) patterns tends to increase. These are memories, photos, genes, and all other forms of record of the past.
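A minimal check of both claims on two classical bits (the CNOT-as-XOR model is the same illustrative simplification as before):

```python
from itertools import product

def cnot(x, y):
    """Controlled-NOT on a pair of bits: (x, y) -> (x, y XOR x)."""
    return x, y ^ x

# A fixed reversible circuit is a bijection on its state space:
states = list(product([0, 1], repeat=2))
assert sorted(cnot(x, y) for x, y in states) == sorted(states)

# Copying a scrambled bit into a fresh 0 turns (X, 0) into (X, X),
# i.e. it creates a record; applying CNOT again undoes the copy.
assert cnot(1, 0) == (1, 1)
assert cnot(*cnot(1, 0)) == (1, 0)
```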
Imagine a water wheel. The direction the river flows in controls the direction that the wheel turns. The amount of water in the wheel doesn’t change.
In this case you do not say “the wheel rotates in the direction of water increase”, but “the wheel rotates in the direction of water flow”.
I can see how you could argue that “the consciousness perceives past and future according to the direction of time in which it radiated heat”. But if you mean that heat flow (or some other entropy-related phenomenon) is the explanation for our time perception (just like the water flow explains the wheel, or the DC voltage explains the current in a circuit), this seems to me a bold and extraordinary claim, one that would need a lot more evidence, both theoretical and experimental.
This is technically true of the universe as a whole. Suppose you take a quantum hard drive filled with 0s, and fill it with bits in an equal superposition of 0 and 1 by applying a Hadamard gate. You can take those bits and apply the gate again to get the 0s back. Entropy has not yet increased. Now print those bits. The universe branches into 2^(bit count) quantum branches. The entropy of the whole structure hasn’t increased, but the entropy of a typical individual branch is higher than that of the whole structure.
Yes, whenever you pinch a density matrix, its entropy increases. It depends on your philosophical stance on measurement and decoherence whether the superposition could be retrieved.
In general, I am more on the skeptical side about the links between abstract information and thermodynamics (see for instance https://arxiv.org/abs/1905.11057). It is my job, so I cannot be entirely skeptical. But there is a lot of work to do before we can claim to have derived thermodynamics from quantum principles (at the state of the art, there is not even a consensus among the experts about what the appropriate definitions of work and heat should be for a quantum system).
Anyway, does the brain actually check whether it can uncompute something? How is this related to the direction in which we perceive the past? The future can (in principle) be computed, and the past cannot be uncomputed; yet we know about the past and not about the future: is this that obvious?
[...] The universe as a whole behaves kind of like a reversible circuit.
This is another strong statement. Maybe in the XVIII century you would have said that the universe is a giant clock (mechanical philosophy), and in the XIX century you would have said that the brain is basically a big telephone switchboard.
I am not saying that it is wrong. Every new technology can provide useful insights about nature. But I think we should be careful not to take these analogies too far.