I think this is near to the core of our disagreement. It seems self-evident that two true laws/descriptions cannot give different predictions about the same system; otherwise, they would not both be true.
It’s self-evident that two true laws/descriptions can’t give contradictory predictions, but in the example I gave there is no contradiction involved. The laws at the fundamental level are invariant under time reversal, but this does not entail that a universe governed by those laws must be invariant under time reversal, so there’s nothing contradictory about there being another law that is not time-reversal invariant.
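A standard illustration of that gap: Newton’s equation m·x″(t) = −V′(x(t)) is invariant under t → −t, so if x(t) is a solution then x(−t) is too. But an individual solution, say uniform motion x(t) = vt, is not itself time-reversal invariant; reversal maps it to the distinct solution x(t) = −vt. The symmetry of the law constrains the set of possible histories, not each particular history.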
If two mathematical objects (as laws of physics tend to be) always yield the same results, it seems natural to try and prove their equivalence.
What do you mean by “yield the same results”? The Second Law makes predictions about the entropy of composite systems. The fundamental laws make predictions about quantum field configurations. These don’t seem like the same results. Of course, the results have to be consistent in some broad sense, but surely consistency does not imply equivalence. I think the intuitions you describe here are motivated by nomic reductionism, and they illustrate the difference between thinking of laws as rules and thinking of them as descriptions.
So the question arises: “Why should the Second Law of Thermodynamics be proved in terms of more ‘fundamental’ laws, rather than the other way around?” (This, if I’m interpreting you correctly, is the double standard.)
No. I don’t take it for granted that either law can be reduced to the other one. It is not necessary that the salient patterns at a non-fundamental level of description are merely a consequence of salient patterns at a lower level of description.
I’m not qualified to assess the validity of the Weyl curvature hypothesis or of the spontaneous eternal inflation model. However, I’ve always understood that the increase in entropy is simply caused by the boundary conditions of the universe, not any time-asymmetry of the laws of physics.
Well, yes, if the Second Law holds, then the early universe must have had low entropy, but many physicists don’t think this is a satisfactory explanation by itself. We could explain all kinds of things by appealing to special boundary conditions but usually we like our explanations to be based on regularities in nature. The Weyl curvature hypothesis and spontaneous eternal inflation are attempts to explain why the early universe had low entropy.
Incidentally, while there are many heuristic arguments that the early universe had a low entropy (such as appeal to its homogeneity), I have yet to see a mathematically rigorous argument. The fact is, we don’t really know how to apply the standard tools of statistical mechanics to a system like the early universe.
What do you mean by “yield the same results”? The Second Law makes predictions about the entropy of composite systems. The fundamental laws make predictions about quantum field configurations. These don’t seem like the same results. Of course, the results have to be consistent in some broad sense, but surely consistency does not imply equivalence. I think the intuitions you describe here are motivated by nomic reductionism, and they illustrate the difference between thinking of laws as rules and thinking of them as descriptions.
The entropy of a system can be calculated from the quantum field configurations, so predictions about them are predictions about entropy. This entropy prediction must match that of the laws of thermodynamics, or the laws are inconsistent.
The entropy of a system can be calculated from the quantum field configurations
This is incorrect. Entropy is not only dependent upon the microscopic state of a system, it is also dependent upon our knowledge of that state. If you calculate the entropy based on an exact knowledge of the microscopic state, the entropy will be zero (at least for classical systems; quantum systems introduce complications), which is of course different from the entropy we would calculate based only on knowledge of the macroscopic state of the system. Entropy is not a property that can be simply reduced to fundamental properties in the manner you suggest.
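To make this concrete, here is a minimal toy calculation (the setup and numbers are illustrative assumptions, not drawn from any particular model): the Gibbs/Shannon entropy depends on the probability distribution you assign, which in turn reflects what you know about the system.

```python
import numpy as np

def shannon_entropy(p):
    """Gibbs/Shannon entropy S = -sum_i p_i ln p_i, in units of k_B."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

omega = 10**6  # microstates compatible with some observed macrostate

# Exact knowledge of the microstate: all probability on one state.
exact = np.zeros(omega)
exact[0] = 1.0
print(shannon_entropy(exact))          # 0.0

# Knowledge of the macrostate only: uniform over compatible microstates.
macro = np.full(omega, 1.0 / omega)
print(shannon_entropy(macro))          # ln(10^6) ≈ 13.8
```

Same microstate, different states of knowledge, different entropies.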
In any case, even if it were true that full knowledge of the microscopic state would allow us to calculate the entropy, it still wouldn’t follow that knowledge of the microscopic laws would allow us to derive the Second Law. The laws only tell us how states evolve over time; they don’t contain information about what the states actually are. So even if the properties of the states are reducible, this does not guarantee that the laws are reducible.
I’m a bit skeptical of your claim that entropy is dependent on your state of knowledge; it’s not what they taught me in my Statistical Mechanics class, and it’s not what my brief skim of Wikipedia indicates. Could you provide a citation or something similar?
Regardless, I’m not sure that matters. Let’s say you start with some prior over possible initial microstates. You can then time-evolve each of these microstates separately; now you have a probability distribution over possible final microstates. You then take the entropy of this system.
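Here is a minimal sketch of the procedure I have in mind, assuming a toy discrete state space and a deterministic, invertible dynamics (everything here is an illustrative stand-in, not a physical model):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1024                           # number of microstates
prior = rng.dirichlet(np.ones(N))  # a prior over initial microstates

# Deterministic, invertible dynamics: one time step sends state i to perm[i].
perm = rng.permutation(N)

def evolve(p, steps):
    """Push the probability distribution forward through the dynamics."""
    for _ in range(steps):
        q = np.empty_like(p)
        q[perm] = p   # probability of state i flows to state perm[i]
        p = q
    return p

def shannon_entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

final = evolve(prior, steps=100)
print(shannon_entropy(final))  # entropy of the distribution over final states
```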
I agree that some knowledge of what the states actually are is built into the Second Law. A more careful claim would be that you can derive the Second Law from certain assumptions about initial conditions and from laws I would claim are more fundamental.
I’m a bit skeptical of your claim that entropy is dependent on your state of knowledge; it’s not what they taught me in my Statistical Mechanics class, and it’s not what my brief skim of Wikipedia indicates. Could you provide a citation or something similar?
Sure. See section 5.3 of James Sethna’s excellent textbook for a basic discussion (free PDF version available here). A quote:
“The most general interpretation of entropy is as a measure of our ignorance about a system. The equilibrium state of a system maximizes the entropy because we have lost all information about the initial conditions except for the conserved quantities… This interpretation—that entropy is not a property of the system, but of our knowledge about the system (represented by the ensemble of possibilities)—cleanly resolves many otherwise confusing issues.”
The Szilard engine is a nice illustration of how knowledge of a system can affect how much work is extractable from it. Here’s a nice experimental demonstration of the same principle (see here for a summary). This is a good book-length treatment of the connection between entropy and knowledge of a system.
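For a concrete number: the maximum work a Szilard engine can extract per bit of information about the system is k_B·T·ln 2. A quick check, assuming room temperature:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # assumed room temperature, K

work_per_bit = k_B * T * math.log(2)
print(f"{work_per_bit:.2e} J")   # ≈ 2.87e-21 J per bit of information
```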
Let’s say you start with some prior over possible initial microstates. You can then time-evolve each of these microstates separately; now you have a probability distribution over possible final microstates. You then take the entropy of this system.
Yes, but the prior over initial microstates is doing a lot of work here. For one, it is encoding the appropriate macroproperties. Adding a probability distribution over phase space in order to make the derivation work seems very different from saying that the Second Law is provable from the fundamental laws. If all you have are the fundamental laws and the initial microstate of the universe then you will not be able to derive the Second Law, because the same microscopic trajectory through phase space is compatible with entropy increase, entropy decrease or neither, depending on how you carve up phase space into macrostates.
EDITED TO ADD: Also, simply starting with a prior and evolving the distribution in accord with the laws will not work (even ignoring the separate worry about gravity below). The entropy of the probability distribution won’t change if you follow that procedure, so you won’t recover the Second Law asymmetry. This is a consequence of Liouville’s theorem. In order to get entropy increase, you need a periodic coarse-graining of the distribution. Adding this ingredient makes your derivation even further from a pure reduction to the fundamental laws.
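A toy demonstration of both halves of this point (a sketch only; Arnold’s cat map and the grid size are arbitrary illustrative choices): the dynamics below is invertible and volume-preserving, yet the coarse-grained entropy of an initially concentrated ensemble climbs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ensemble of points concentrated in a small region: a "low-entropy" start.
pts = rng.uniform(0.0, 0.05, size=(100_000, 2))

def cat_map(p):
    """Arnold's cat map on the unit square: area-preserving and chaotic."""
    x, y = p[:, 0], p[:, 1]
    return np.stack([(x + y) % 1.0, (x + 2.0 * y) % 1.0], axis=1)

def coarse_entropy(p, bins=32):
    """Shannon entropy of the ensemble binned onto a bins x bins grid."""
    h, _, _ = np.histogram2d(p[:, 0], p[:, 1], bins=bins,
                             range=[[0.0, 1.0], [0.0, 1.0]])
    q = h.ravel() / h.sum()
    q = q[q > 0]
    return float(-np.sum(q * np.log(q)))

for t in range(8):
    print(t, round(coarse_entropy(pts), 3))
    pts = cat_map(pts)
# The coarse-grained entropy climbs toward ln(32*32) ≈ 6.93, even though the
# map is invertible and volume-preserving, so the fine-grained (Liouville)
# entropy of the underlying distribution never changes. The increase comes
# entirely from the coarse-graining.
```

Changing the grid, i.e. the choice of macrostates, changes the numbers, which echoes the point above about how entropy depends on how you carve up phase space.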
In any case, it is not so clear that even the procedure you propose works. The main account of why the entropy was low in the early universe appeals to the entropy of the gravitational field as compensation for the high thermal entropy of the initial state. As of yet, I haven’t seen any rigorous demonstration of how to apply the standard tools of statistical physics to the gravitational field, such as constructing a phase space which incorporates gravitational degrees of freedom. Hawking and Page attempted to do something like this (I could find you the citation if you like, but I can’t remember it off the top of my head), but they came up with weird results. (ETA: Here’s the paper I was thinking of.) The natural invariant measure over state space turned out not to be normalizable in their model, which means that one could not define sensible probability distributions over it. So I’m not yet convinced that the techniques we apply so fruitfully when it comes to thermal systems can be applied to the universe as a whole.
Also, simply starting with a prior and evolving the distribution in accord with the laws will not work (even ignoring the separate worry about gravity below). The entropy of the probability distribution won’t change if you follow that procedure, so you won’t recover the Second Law asymmetry. This is a consequence of Liouville’s theorem. In order to get entropy increase, you need a periodic coarse-graining of the distribution. Adding this ingredient makes your derivation even further from a pure reduction to the fundamental laws.
Dang, you’re right. I’m still not entirely convinced of your point in the original post, but I think I need to do some reading up in order to:
1. Understand the distinction in approaches to the Second Law that you’re suggesting is not sufficiently explored, and
2. See whether it seems plausible that this is a result of treating physics as rules instead of descriptions.
This has been an interesting thread; I hope to continue discussing this at some point in the not super-distant future (I’m going to be pretty busy over the next week or so).