There’s a widely acknowledged problem involving the Second Law of Thermodynamics. The problem stems from the fact that all known fundamental laws of physics are invariant under time reversal (well, invariant under CPT, to be more accurate) while the Second Law (a non-fundamental law) is not. Now, why is the symmetry at the fundamental level regarded as being in tension with the asymmetry at the non-fundamental level? It is not true that solutions to symmetric equations must generically share those same symmetries. In fact, the opposite is true. It can be proved that generic solutions of systems of partial differential equations have fewer symmetries than the equations. So it’s not like we should expect that a generic universe describable by time-reversal symmetric laws will also be time-reversal symmetric at every level of description. So what’s the source of the worry then?
I think it comes from a commitment to nomic reductionism. The Second Law is, well, a law. But if you really believe that laws are rules, there is no room for autonomous laws at non-fundamental levels of description. The law-likeness, or “ruliness”, of any such law must really stem from the fundamental laws. Otherwise you have overdetermination of physical behavior. Here’s a rhetorical question taken from a paper on the problem: “What grounds the lawfulness of entropy increase, if not the underlying dynamical laws, the laws governing the world’s fundamental physical ontology?” The question immediately reveals two assumptions associated with thinking of laws as rules: the lawfulness of a non-fundamental law must be “grounded” in something, and this grounding can only conceivably come from the fundamental laws.
So we get a number of attempts to explain the lawfulness of the Second Law by expanding the set of fundamental laws. Examples include Penrose’s Weyl curvature hypothesis and Carroll and Chen’s spontaneous eternal inflation model. These hypotheses are constructed specifically to account for lawful entropy increase. Now nobody thinks, “The lawfulness of quantum field theory needs grounding. Can I come up with an elaborate hypothesis whose express purpose is accounting for why it is lawful?” (EDIT: Bad example. See this comment) The lawfulness of fundamental laws is not seen as requiring grounding in the same way as non-fundamental laws. If you think of laws as descriptions rather than rules, this starts to look like an unjustified double standard. Why would macroscopic patterns require grounding in a way that microscopic patterns do not?
I can’t fully convey my own take on the Second Law issue in a comment, but I can give a gist. The truth of the Second Law depends on the particular manner in which we partition phase space into macrostates. For the same microscopic trajectory through phase space, different partitions will deliver different conclusions about entropy. We could partition phase space so that entropy decreases monotonically (for some finite length of time), increases monotonically, or exhibits no monotonic trend. And this is true for any microscopic trajectory through any phase space. So the existence of some partition according to which the Second Law is true is no surprise. What does require explanation is why this is the natural partition. But which partition is natural is explained by our epistemic and causal capacities. The natural macrostates are the ones which group together microstates which said capacities cannot distinguish and separate microstates which they can. So what needs to be explained is why our capacities are structured so as to carve up phase space in a manner that leads to the Second Law. But this is partly a question about us, and it’s the sort of question that invites an answer based on an observation selection effect—something like “Agency is only possible if the system’s capacities are structured so as to carve up its environment in this manner.” My view is that the asymmetry of the Second Law is a consequence of an asymmetry in agency—the temporal direction in which agents can form and read reliable records about a system’s state must differ from the temporal direction in which an agent’s action can alter a system’s state. I could say a lot more here but I won’t.
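To make the partition-dependence claim concrete, here is a toy sketch of my own (a deliberately artificial example, not anything from the literature): one fixed trajectory through a ten-microstate toy state space, with two hand-built partitions under which the Boltzmann-style entropy (the log of the size of the occupied macrostate) rises in one case and falls in the other.

```python
import math

# Toy "phase space": ten microstates labeled 0..9, and one fixed
# microscopic trajectory (the microstate occupied at each time step).
trajectory = [0, 1, 2, 3]

def entropy_along(trajectory, partition):
    """Boltzmann-style entropy at each step: log of the number of
    microstates in whatever macrostate the trajectory currently occupies."""
    volumes = {}
    for macro in partition.values():
        volumes[macro] = volumes.get(macro, 0) + 1
    return [round(math.log(volumes[partition[m]]), 3) for m in trajectory]

# Partition A: the visited macrostates get successively bigger.
partition_A = {0: 'A0', 1: 'A1', 4: 'A1', 2: 'A2', 5: 'A2', 6: 'A2',
               3: 'A3', 7: 'A3', 8: 'A3', 9: 'A3'}

# Partition B: the same ten microstates, regrouped so the visited
# macrostates get successively smaller.
partition_B = {0: 'B0', 7: 'B0', 8: 'B0', 9: 'B0',
               1: 'B1', 5: 'B1', 6: 'B1',
               2: 'B2', 4: 'B2',
               3: 'B3'}

print(entropy_along(trajectory, partition_A))  # [0.0, 0.693, 1.099, 1.386] -- increasing
print(entropy_along(trajectory, partition_B))  # [1.386, 1.099, 0.693, 0.0] -- decreasing
```

Nothing changes in the continuous case except that the macrostates become regions of phase space rather than finite sets of labels.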
The point is that this sort of explanation is very different from the kind that most physicists are pursuing. I’m not saying it’s definitely the right tack to pursue, but it is weird to me that it basically hasn’t been pursued at all. And I think the reason for that is that it isn’t the kind of grounding that the prescriptive viewpoint leads one to demand. So implicit adherence to this viewpoint has in this case led to a promising line of inquiry being largely ignored.
I think it comes from a commitment to nomic reductionism. The Second Law is, well, a law. But if you really believe that laws are rules, there is no room for autonomous laws at non-fundamental levels of description. The law-likeness, or “ruliness”, of any such law must really stem from the fundamental laws. Otherwise you have overdetermination of physical behavior. Here’s a rhetorical question taken from a paper on the problem: “What grounds the lawfulness of entropy increase, if not the underlying dynamical laws, the laws governing the world’s fundamental physical ontology?” The question immediately reveals two assumptions associated with thinking of laws as rules: the lawfulness of a non-fundamental law must be “grounded” in something, and this grounding can only conceivably come from the fundamental laws.
Yes. One might worry that the second law, which is clearly not fundamental, doesn’t seem to be grounded in a fundamental law. The usual solution to this is to realize that we are forgetting an important fundamental law, namely the boundary conditions on the universe. Then we realize that the non-fundamental law of entropy increase is grounded in the fundamental law that gives the initial conditions of the universe. I don’t think this is “[coming] up with an elaborate hypothesis whose express purpose is accounting for why [the second law] is lawful,” as you seem to imply. Even if we didn’t need to explain the second law we would expect the fundamental laws to specify the initial conditions of the universe. The second law is just one of the observations that provide evidence about what those initial conditions must have been.
I think it comes from a commitment to nomic reductionism. The Second Law is, well, a law. But if you really believe that laws are rules, there is no room for autonomous laws at non-fundamental levels of description. The law-likeness, or “ruliness”, of any such law must really stem from the fundamental laws. Otherwise you have overdetermination of physical behavior.
First of all, thank you for your detailed reply.

I think this is near to the core of our disagreement. It seems self-evident that two true laws/descriptions cannot give different predictions about the same system; otherwise, they would not both be true. If two mathematical objects (as laws of physics tend to be) always yield the same results, it seems natural to try and prove their equivalence. For example, when I learned Lagrangian mechanics in physics class, we proved it equivalent to Newtonian mechanics.
So the question arises, “why should the Second Law of Thermodynamics be proved in terms of more “fundamental” laws, rather than the other way around?” (this, if I’m interpreting you correctly, is the double standard). This is simply because the domain in which the Second Law can make predictions is much smaller than that of more fundamental laws. The second law of thermodynamics is silent about what happens when I dribble a ball; Newton’s laws are not. As such, one proves the Second Law in terms of non-thermodynamic laws. “Fundamentalness” seems to simply be a description of domain of applicability.
I’m not qualified to assess the validity of the Weyl curvature hypothesis or of the spontaneous eternal inflation model. However, I’ve always understood that the increase in entropy is simply caused by the boundary conditions of the universe, not any time-asymmetry of the laws of physics.
I think this is near to the core of our disagreement. It seems self-evident that two true laws/descriptions cannot give different predictions about the same system; otherwise, they would not both be true.
It’s self-evident that two true laws/descriptions can’t give contradictory predictions, but in the example I gave there is no contradiction involved. The laws at the fundamental level are invariant under time reversal, but this does not entail that a universe governed by those laws must be invariant under time reversal, so there’s nothing contradictory about there being another law that is not time reversal invariant.
If two mathematical objects (as laws of physics tend to be) always yield the same results, it seems natural to try and prove their equivalence.
What do you mean by “yield the same results”? The Second Law makes predictions about the entropy of composite systems. The fundamental laws make predictions about quantum field configurations. These don’t seem like yielding the same results. Of course, the results have to be consistent in some broad sense, but surely consistency does not imply equivalency. I think the intuitions you describe here are motivated by nomic reductionism, and they illustrate the difference between thinking of laws as rules and thinking of them as descriptions.
So the question arises, “why should the Second Law of Thermodynamics be proved in terms of more “fundamental” laws, rather than the other way around?” (this, if I’m interpreting you correctly, is the double standard)
No. I don’t take it for granted that either law can be reduced to the other one. It is not necessary that the salient patterns at a non-fundamental level of description are merely a consequence of salient patterns at a lower level of description.
I’m not qualified to assess the validity of the Weyl curvature hypothesis or of the spontaneous eternal inflation model. However, I’ve always understood that the increase in entropy is simply caused by the boundary conditions of the universe, not any time-asymmetry of the laws of physics.
Well, yes, if the Second Law holds, then the early universe must have had low entropy, but many physicists don’t think this is a satisfactory explanation by itself. We could explain all kinds of things by appealing to special boundary conditions but usually we like our explanations to be based on regularities in nature. The Weyl curvature hypothesis and spontaneous eternal inflation are attempts to explain why the early universe had low entropy.
Incidentally, while there are many heuristic arguments that the early universe had a low entropy (such as appeal to its homogeneity), I have yet to see a mathematically rigorous argument. The fact is, we don’t really know how to apply the standard tools of statistical mechanics to a system like the early universe.
What do you mean by “yield the same results”? The Second Law makes predictions about the entropy of composite systems. The fundamental laws make predictions about quantum field configurations. These don’t seem like yielding the same results. Of course, the results have to be consistent in some broad sense, but surely consistency does not imply equivalency. I think the intuitions you describe here are motivated by nomic reductionism, and they illustrate the difference between thinking of laws as rules and thinking of them as descriptions.
The entropy of a system can be calculated from the quantum field configurations, so predictions about them are predictions about entropy. This entropy prediction must match that of the laws of thermodynamics, or the laws are inconsistent.
The entropy of a system can be calculated from the quantum field configurations
This is incorrect. Entropy is not only dependent upon the microscopic state of a system, it is also dependent upon our knowledge of that state. If you calculate the entropy based on an exact knowledge of the microscopic state, the entropy will be zero (at least for classical systems; quantum systems introduce complications), which is of course different from the entropy we would calculate based only on knowledge of the macroscopic state of the system. Entropy is not a property that can be simply reduced to fundamental properties in the manner you suggest.
In any case, even if it were true that full knowledge of the microscopic state would allow us to calculate the entropy, it still wouldn’t follow that knowledge of the microscopic laws would allow us to derive the Second Law. The laws only tell us how states evolve over time; they don’t contain information about what the states actually are. So even if the properties of the states are reducible, this does not guarantee that the laws are reducible.
I’m a bit skeptical of your claim that entropy is dependent on your state of knowledge; it’s not what they taught me in my Statistical Mechanics class, and it’s not what my brief skim of Wikipedia indicates. Could you provide a citation or something similar?
Regardless, I’m not sure that matters. Let’s say you start with some prior over possible initial microstates. You can then time evolve each of these microstates separately; now you have a probability distribution over possible final microstates. You then take the entropy of this system.
I agree that some knowledge of what the states actually are is built into the Second Law. A more careful claim would be that you can derive the Second Law from certain assumptions about initial conditions and from laws I would claim are more fundamental.
I’m a bit skeptical of your claim that entropy is dependent on your state of knowledge; it’s not what they taught me in my Statistical Mechanics class, and it’s not what my brief skim of Wikipedia indicates. Could you provide a citation or something similar?
Sure. See section 5.3 of James Sethna’s excellent textbook for a basic discussion (free PDF version available here). A quote:
“The most general interpretation of entropy is as a measure of our ignorance about a system. The equilibrium state of a system maximizes the entropy because we have lost all information about the initial conditions except for the conserved quantities… This interpretation—that entropy is not a property of the system, but of our knowledge about the system (represented by the ensemble of possibilities) -- cleanly resolves many otherwise confusing issues.”
The Szilard engine is a nice illustration of how knowledge of a system can impact how much work is extractable from a system. Here’s a nice experimental demonstration of the same principle (see here for a summary). This is a good book-length treatment of the connection between entropy and knowledge of a system.
Let’s say you start with some prior over possible initial microstates. You can then time evolve each of these microstates separately; now you have a probability distribution over possible final microstates. You then take the entropy of this system.
Yes, but the prior over initial microstates is doing a lot of work here. For one, it is encoding the appropriate macroproperties. Adding a probability distribution over phase space in order to make the derivation work seems very different from saying that the Second Law is provable from the fundamental laws. If all you have are the fundamental laws and the initial microstate of the universe then you will not be able to derive the Second Law, because the same microscopic trajectory through phase space is compatible with entropy increase, entropy decrease or neither, depending on how you carve up phase space into macrostates.
EDITED TO ADD: Also, simply starting with a prior and evolving the distribution in accord with the laws will not work (even ignoring what I say in the next paragraph). The entropy of the probability distribution won’t change if you follow that procedure, so you won’t recover the Second Law asymmetry. This is a consequence of Liouville’s theorem. In order to get entropy increase, you need a periodic coarse-graining of the distribution. Adding this ingredient makes your derivation even further from a pure reduction to the fundamental laws.
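Here is a rough discrete sketch of this point, using a permutation of cells as a stand-in for volume-preserving (Liouville) evolution; the construction is my own toy example, not anything from the literature. Evolving the distribution alone leaves its Shannon entropy fixed, while coarse-graining after each step gives a non-decreasing entropy.

```python
import math

N, BLOCK = 16, 4  # sixteen cells grouped into four coarse-graining blocks

def evolve(p):
    # Volume-preserving dynamics: a permutation of the cells, a discrete
    # stand-in for Liouville evolution. It conserves Shannon entropy exactly.
    q = [0.0] * N
    for x, px in enumerate(p):
        q[(3 * x + 1) % N] += px
    return q

def coarse_grain(p):
    # Replace the probability in each block by the block average.
    q = [0.0] * N
    for b in range(0, N, BLOCK):
        avg = sum(p[b:b + BLOCK]) / BLOCK
        for i in range(b, b + BLOCK):
            q[i] = avg
    return q

def entropy(p):
    return -sum(px * math.log(px) for px in p if px > 0)

fine = coarse = [0.25] * 4 + [0.0] * 12   # start: uniform over cells 0-3
for step in range(5):
    fine = evolve(fine)                    # evolution alone
    coarse = coarse_grain(evolve(coarse))  # evolution plus periodic coarse-graining
    print(step, round(entropy(fine), 3), round(entropy(coarse), 3))
# The first entropy column stays at log 4 (about 1.386); the second never decreases.
```

The coarse-graining step is an averaging within blocks, which can never lower the Shannon entropy, and the permutation can never change it, so the asymmetry only appears once the coarse-graining is added.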
In any case, it is not so clear that even the procedure you propose works. The main account of why the entropy was low in the early universe appeals to the entropy of the gravitational field as compensation for the high thermal entropy of the initial state. As of yet, I haven’t seen any rigorous demonstration of how to apply the standard tools of statistical physics to the gravitational field, such as constructing a phase space which incorporates gravitational degrees of freedom. Hawking and Page attempted to do something like this (I could find you the citation if you like, but I can’t remember it off the top of my head), but they came up with weird results. (ETA: Here’s the paper I was thinking of.) The natural invariant measure over state space turned out not to be normalizable in their model, which means that one could not define sensible probability distributions over it. So I’m not yet convinced that the techniques we apply so fruitfully when it comes to thermal systems can be applied to the universe as a whole.
Also, simply starting with a prior and evolving the distribution in accord with the laws will not work (even ignoring what I say in the next paragraph). The entropy of the probability distribution won’t change if you follow that procedure, so you won’t recover the Second Law asymmetry. This is a consequence of Liouville’s theorem. In order to get entropy increase, you need a periodic coarse-graining of the distribution. Adding this ingredient makes your derivation even further from a pure reduction to the fundamental laws.
Dang, you’re right. I’m still not entirely convinced of your point in the original post, but I think I need to do some reading up in order to:
1. Understand the distinction in approach to the Second Law that you’re proposing is not sufficiently explored.
2. See if it seems plausible that this is a result of treating physics as rules instead of descriptions.
This has been an interesting thread; I hope to continue discussing this at some point in the not super-distant future (I’m going to be pretty busy over the next week or so).
Thanks for that comment, I very much enjoy these topics.
“Agency is only possible if the system’s capacities are structured so as to carve up its environment in this manner.”
Why would we not be able to accurately describe and process the occasional phenomenon that went counter to the Second Law?
Intermittent decreases in entropy might even make the evolution of complex brains more likely; at least it does not make the existence of agents such as us less likely prima facie. If you want to rely on the Anthropic Principle, you’d need to establish why it would prefer such strict adherence to the Second Law.
Are you familiar with Smolin’s paper on the AP? “It is explained in detail why the Anthropic Principle (AP) cannot yield any falsifiable predictions, and therefore cannot be a part of science.” For a rebuttal, see the Smolin-Susskind dialogue here.
Even if there were a case to be made that agency would only be possible if the partition generally follows the Second Law, it would be outright unexpected for the partition to follow it as strictly as we assume it does.
Out of the myriad trajectories through phase space, why would the one perfectly (in the sense of as yet unfalsified) mimicking the Second Law be taken? There could surely exist agents if there were just a general, or even very close, correspondence. That would be vastly more likely for us to observe if we were i.i.d. chosen from all such worlds with agency (self-sampling assumption).
I am familiar with Smolin’s objections, but I don’t buy them. His argument hinges on accepting an outmoded Popperian philosophy of science. I don’t think it holds if one adopts a properly Bayesian perspective. In any case, I think my particular form of anthropic argument counts as a selection effect within one world, a form of argument to which even he doesn’t object.
As for the ubiquity of Second Law-obeying systems, I admit it is something I have thought about and it does worry me a little. I don’t have a fully worked response, but here’s a tentative answer: If there were the occasional spontaneously entropy decreasing macroscopic system in our environment, the entropy decrease would be very difficult to corral. As long as such a system could interact with other systems, we could use it to extract work from those other systems as well. And, as I said, if most of the systems in our environment were not Second Law-obeying, then we could not exercise our agency by learning about them and acting on them based on what we learn. So perhaps there’s a kind of instability to the situation where a few systems don’t obey the Second Law while the rest do that explains why this is not the situation we’re in.
So what needs to be explained is why our capacities are structured so as to carve up phase space in a manner that leads to the Second Law. But this is partly a question about us, and it’s the sort of question that invites an answer based on an observation selection effect—something like “Agency is only possible if the system’s capacities are structured so as to carve up its environment in this manner.” My view is that the asymmetry of the Second Law is a consequence of an asymmetry in agency—the temporal direction in which agents can form and read reliable records about a system’s state must differ from the temporal direction in which an agent’s action can alter a system’s state.
Interesting idea, but doesn’t it lead to something akin to the Boltzmann Brain problem? This asymmetry would hold for an agent’s brain and its close environment, but I don’t see a reason why it should hold in the same way for the wider universe. So shouldn’t we predict that when we make new observations with information coming from outside our previous past lightcone, we will not see the same Second Law holding? Or maybe I have misunderstood you completely...
The Boltzmann brain problem usually arises when your model assigns a probability distribution over the universal phase space according to which an arbitrary observer is more likely to be a Boltzmann brain than an ordinary observer. There are various reasons why my model does not succumb to this probabilistic kind of Boltzmann brain problem which I’d be happy to go into if you desire.
However, your particular concern seems to be of a different kind. It’s not that Boltzmann brains are more likely according to the model, it is that the model gives no reason to suppose that we are not Boltzmann brains. The model does not tell us why we should expect macroscopic regularities to continue to hold outside our immediate environment. Is this an accurate assessment of your worry? If it is, I think it is demanding too much of a physical model. You are essentially asking for a solution to the problem of induction, I think. My view is that we should expect (certain) macroscopic regularities to persist for the same sorts of reasons that we expect microscopic regularities to persist. Of course, if there were specific probabilistic arguments against the persistence of macroscopic regularities, I would have a problem. But like I said above, those don’t arise for my model the same way they do for Boltzmann’s.
Yes, your second paragraph gets at what I was thinking (and you are right that it is not exactly the Boltzmann Brain problem). But I don’t think it is the same as the general problem of induction, either.
On your model, if I understand correctly, there are microscopic, time symmetric laws that hold everywhere. (That they hold everywhere and not just in our experience we take for granted—we are not allowing Humean worries about induction while doing physics, and that’s fine.) But on top of that, there is a macroscopic law that we observe, the Second Law, and you are proposing (I think—maybe I misunderstand you) that its explanation lies in the fact that we are agents and observers, and that the immediate environment of a system that is an agent and observer must exhibit this kind of time asymmetry. But then, we should not expect this macroscopic regularity to hold beyond our immediate environment. I think this is ordinary scientific reasoning, not Humean skepticism.
Do you have a similar concern about Tegmark’s anthropic argument for the microscopic laws? It only establishes that we must be in a universe where our immediate environment follows those laws, not that those laws hold everywhere in the universe.
The Second Law includes the definition of the partitions to which it applies: it specifically allows ‘local’ reductions in entropy, but for any partition which exhibits a local decrease in entropy, the complementary partition exhibits a greater total increase in entropy.
If you construct your partition creatively, consider the complementary partition which you are also constructing?
I think we’re using the word “partition” in two different senses. When I talk about a partition of phase space, I’m referring to this notion. I’m not sure exactly what you’re referring to.
The partition isn’t over Newtonian space, it’s over phase space, a space where every point represents an entire dynamical state of the system. If there are N particles in the system, and the particles have no internal degrees of freedom, phase space will have 6N dimensions, 3N for position and 3N for momentum. A partition over phase space is a division of the space into mutually exclusive sub-regions that collectively exhaust the space. Each of these sub-regions is associated with a macrostate of the system. Basically you’re grouping together all the microscopic dynamical configurations that are macroscopically indistinguishable.
Now, describe a state in which the entropy of an isolated system will decrease over some time period. Calculate entropy at the same level of abstraction at which you are describing the system (if you describe temperature as temperature, use temperature; if you describe energy states of electrons and velocities of particles, use those instead of temperature to calculate entropy).
When I checked post-Newtonian physics last, I didn’t see the laws of thermodynamics included. Clearly some of the conservation rules don’t apply in the absence of others which have been provably violated; momentum isn’t conserved when mass isn’t conserved, for example.
The entropy of a closed system in equilibrium is given by the logarithm of the volume of the region of phase space corresponding to the system’s macrostate. So if we partition phase space differently, so that the macrostates are different, judgments about the entropy of particular microstates will change. Now, according to our ordinary partitioning of phase space, the macrostate associated with an isolated system’s initial microstate will not have a larger volume than the macrostate associated with its final microstate. However, this is due to the partition, not just the system’s actual microscopic trajectory. With a different partition, the same microscopic trajectory will start in a macrostate of higher entropy and evolve to a macrostate of lower entropy.
Of course, this latter partition will not correspond nicely with any of the macroproperties (such as, say, system volume) that we work with. This is what I meant when I called it unnatural. But its unnaturalness has to do with the way we are constructed. Nature doesn’t come pre-equipped with a list of the right macroproperties.
Here’s an example: Put a drop of ink in a glass of water. The ink will gradually spread out through the water. This is a process in which entropy increases. There are many different ways the ink could initially be dropped into the water (on the right or left side of the cup, for instance), and we can distinguish between these different ways just by looking. As the ink spreads out, we are no longer able to distinguish between different spread out configurations. Even though we know that dropping the ink on the right side must lead to a microscopic spread out configuration different from the one we would obtain by dropping the ink on the left side, these configurations are not macroscopically distinguishable once the ink has spread out enough. They both just look like ink uniformly spread throughout the water. This is characteristic of entropy increase: macroscopically available distinctions get suppressed. We lose macroscopic information about the system.
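A quick simulation sketch of that loss of macroscopic information (my own toy illustration, with made-up parameters): random-walk some ink particles starting from the left wall or from the right wall of a one-dimensional box, and compare the coarse-grained occupation histograms, which are the only thing “looking” gives us access to.

```python
import random

random.seed(0)
CELLS, STEPS, N_INK = 10, 2000, 300   # made-up toy parameters

def diffuse(start_cell):
    """Random-walk N_INK ink particles in a 1-D box with reflecting walls
    and return the occupation histogram over the CELLS coarse cells."""
    positions = [start_cell] * N_INK
    for _ in range(STEPS):
        for i in range(N_INK):
            positions[i] = min(max(positions[i] + random.choice((-1, 1)), 0), CELLS - 1)
    hist = [0] * CELLS
    for x in positions:
        hist[x] += 1
    return hist

left = diffuse(start_cell=0)   # ink dropped at the left wall
right = diffuse(start_cell=9)  # ink dropped at the right wall

print(left)   # both histograms come out roughly uniform,
print(right)  # so the two preparations now "look" the same
```

Up to statistical fluctuations the two histograms are the same, so the left-drop versus right-drop distinction is no longer macroscopically available, even though the exact particle positions still differ.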
Now think of some kind of alien with a weird sensory apparatus. Its senses do not allow it to distinguish between different ways of initially dropping the ink into the water. The percepts associated with an ink drop on the right side of the cup and a drop on the left side of the cup are sufficiently similar that it cannot tell the difference. However, it is able to distinguish between different spread out configurations. To this alien the ink mixing in water would be an entropy decreasing process because its natural macrostates are different from ours. Now obviously the alien’s sensory and cognitive apparatus would be hugely different from our own, and there might be all kinds of biological reasons we would not expect such an alien to exist, but the point is that there is nothing in the fundamental laws of physics ruling out its existence.
No, you can’t redefine the phase state volumes so that more than one macrostate exists within a given partition, and you can’t use a different scale to determine macrostate than you do for entropy.
Of course, to discuss a system not in equilibrium, you need to use formulas that apply to systems that aren’t in equilibrium. The only time your system is in equilibrium is at the end, after the ink has either completely diffused or settled to the top or bottom.
And the second law of thermodynamics applies to isolated systems, not closed systems. Isolated systems are a subset of closed systems.
No, you can’t redefine the phase state volumes so that more than one macrostate exists within a given partition, and you can’t use a different scale to determine macrostate than you do for entropy.
We still seem to be talking past each other. Neither of these is an accurate description of what I’m doing. In fact, I’m not even sure what you mean here. I still suspect you haven’t understood what I mean when I talk about a partition of phase space. Maybe you could clarify how you’re interpreting the concept?
The only time your system is in equilibrium is at the end, after the ink has either completely diffused or settled to the top or bottom.
Yes, I recognize this. None of what I said about my example relies on the process being quasistatic. Of course, if the system isn’t in equilibrium, its entropy isn’t directly measurable as the volume of the corresponding macroregion, but it is the Shannon entropy of a probability distribution that only has support within the macroregion (i.e., it vanishes outside the macroregion). The difference from equilibrium is that the distribution won’t be uniform within the relevant macroregion. It is still the case, though, that a distribution spread out over a much larger macroregion will in general have a higher entropy than one spread out over a smaller volume, so using volume in phase space as a proxy for entropy still works.
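For concreteness, the quantity I have in mind is just the standard Gibbs/Shannon expression (textbook material, nothing specific to my argument):

S = -k_B ∫ ρ ln ρ dΓ

where ρ is a probability density with support only inside the macroregion; when ρ is uniform over a macroregion of phase-space volume Ω, this reduces to the familiar S = k_B ln Ω.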
And the second law of thermodynamics applies to isolated systems, not closed systems. Isolated systems are a subset of closed systems.
Fair enough. My use of the word “closed” was sloppy. Don’t see how this affects the point though.
Now you’ve put yourself in a position which is inconsistent with your previous claim that diffuse ink can be defined to have a lower entropy than a mixture of concentrated ink and pure water. One response is that they have virtually identical entropy. That’s also the correct answer, since the isolated system of the container of water reaches a maximum entropy when temperature is equalized and the ink is fully diffused. The ink does not spontaneously concentrate back into a drop, despite the very small drop in entropy.
Now you’ve put yourself in a position which is inconsistent with your previous claim that diffuse ink can be defined to have a lower entropy than a mixture of concentrated ink and pure water.
How so? Again, I really suspect that you are misunderstanding my position, because various commitments you attribute to me do not look at all familiar. I can’t isolate the source of the misunderstanding (if one exists) unless you give me a clear account of what you take me to be saying.
Now think of some kind of alien with a weird sensory apparatus. Its senses do not allow it to distinguish between different ways of initially dropping the ink into the water. The percepts associated with an ink drop on the right side of the cup and a drop on the left side of the cup are sufficiently similar that it cannot tell the difference. However, it is able to distinguish between different spread out configurations. To this alien the ink mixing in water would be an entropy decreasing process because its natural macrostates are different from ours. Now obviously the alien’s sensory and cognitive apparatus would be hugely different from our own, and there might be all kinds of biological reasons we would not expect such an alien to exist, but the point is that there is nothing in the fundamental laws of physics ruling out its existence.
This is where you tried to define the entropy of diffuse ink to be lower.
The highest entropy phase state is the one in which the constraints on each variable are least restrictive. That means that the state where each ink particle can be in any position within the glass is (other things being equal) higher entropy than a state where each ink particle is constrained to be in a small area.
Entropy is a physical property similar to temperature, in that at a certain level it becomes momentum. If you view a closed Carnot cycle, you will note that the source loses heat, and the sink gains heat, and that the source must be hotter than the sink. There being no method by which the coldest sink can be made colder, nor by which the total energy can be increased, the gap can only decrease.
You’re applying intuitions garnered from classical thermodynamics, but thermodynamics is a phenomenological theory entirely superseded by statistical mechanics. It’s sort of like applying Newtonian intuitions to resist the implications of relativity.
Yes, in classical thermodynamics entropy is a state function—a property of an equilibrium state just like its volume or magnetization—but we now know (thanks to stat. mech.) that this is not the best way to think about entropy. Entropy is actually a property of probability distributions over phase space, and if you believe that probability is in the mind, it’s hard to deny that entropy is in some sense an agent-relative notion. If probability is in the mind and entropy depends on probability, then entropy is at least partially in the mind as well.
Still, the agent-relativity can be seen in thermodynamics as well, without having to adopt the probabilistic conception of entropy. The First Law tells us that any change in the internal energy of the system is a sum of the heat transferred to the system and the work done on the system. But how do we distinguish between these two forms of energy transfer? Well, heat is energy transferred through macroscopically uncontrollable degrees of freedom, while work is energy transferred through macroscopically controllable degrees of freedom. Whether a particular degree of freedom is macroscopically controllable is an agent-relative notion. Here is the fundamental equation of thermodynamics:
dE = T dS + F1 dX1 + F2 dX2 + F3 dX3 + …
The Fs and Xs here are macroscopic “force” and “displacement” terms, representing different ways we can do mechanical work on the system (or extract work from the system) by adjusting its macroscopic constraints. Particular examples of these force-displacement pairs are pressure-volume (usually this is the only one considered in introductory courses on thermodynamics), electric field-polarization, tension-length. These work terms—the controllable degrees of freedom—are chosen based on our ability to interact with the system, which in turn depends on the kinds of creatures we are. Any part of the change in energy that is not explicable by the work terms is attributed to the heat term—T dS—and the S here is of course thermodynamic entropy. So the entropy comes from the heat term, which depends on the work terms, which in turn depend on our capacities for macroscopic intervention on the system. Aliens with radically different capacities could have different work terms and hence calculate a different thermodynamic entropy. [ETA: And of course the thermodynamic state space is defined by the work terms, which explains how entropy can be a state function and still be an agent-dependent quantity.]
The work we can extract from a system depends on our knowledge of the system. This is a point that has been understood for a while. Read this post on the Szilard engine for a nice illustration of how our knowledge about a system can affect the amount of work we can get it to do. But of course if extractable work depends on knowledge, then the heat dissipated by the system must also depend on our knowledge, since heat is just the complement of work (it is that portion of the energy change that cannot be accounted for by work done). And if the heat dissipated is a function of our knowledge, so is the entropy. If our capacities were different—if we could have more or different knowledge about the system—our judgment of its entropy would differ.
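To put a rough number on the Szilard point (this is the standard textbook figure, not anything new): one bit of information about which half of the box the molecule occupies lets you extract at most k_B T ln 2 of work.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assuming room temperature, in kelvin

# Maximum work a Szilard engine can extract per bit of information
# about which half of the box the molecule is in.
max_work_per_bit = k_B * T * math.log(2)
print(f"{max_work_per_bit:.2e} J")   # about 2.9e-21 J
```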
The maximum work you can extract from a system does not depend on knowledge: greater knowledge may let you get work done more efficiently, and if you operate on the scale where raising an electron to a higher energy state is ‘useful work’ and not ‘heat’, then you can minimize the heat term.
But you can’t have perfect knowledge about the system, because matter cannot be perfectly described. If the state of the box becomes more knowable than it was (per Heisenberg uncertainty), then the state outside the box must become less knowable than it was. You could measure the knowability of a system by determining how many states are microscopically indistinguishable from the observed state: as energies of the particles equalize (such that the number of possible Planck-unit positions is more equally divided between all the particles), there are more total states which are indistinguishable (since the total number of possible states is equal to the product of the number of possible states for each particle, and energy is conserved).
If you can show where there are spontaneous interactions which result in two particles having a greater difference in total energy after they interact than they had before they interact, feel free to win every Nobel prize ever.
But its unnaturalness has to do with the way we are constructed. Nature doesn’t come pre-equipped with a list of the right macroproperties.
It seems likely to me that the laws of motion governing the time evolution of microstates have something to do with determining the “right” macroproperties—that is, the ones that lead to reproducible states and processes on the macro scale. (Something to do with coarse-graining, maybe?) Then natural selection filters for organisms that take advantage of these macro regularities.
Now nobody thinks, “The lawfulness of quantum field theory needs grounding
Isn’t that exactly what hidden-variable theories try to do? There have been a lot of people dissatisfied with the probabilistic nature of quantum mechanics, and have sought something more fundamental to explain the probabilities.
Hidden variable theories are not an attempt to ground the lawfulness of quantum mechanics. The Schrodinger equation isn’t reduced to something deeper in Bohmian mechanics. It appears as a basic unexplained law in the theory, just as it does in orthodox interpretations of QM. The motivation behind hidden variable theories is to repair purported conceptual defects in standard presentations of QM, not to account for the existence of the laws of QM.
I do think my claim is wrong, though. People do ask what grounds quantum field theory. In fact, that’s a pretty common question. But that’s mainly because people now realize that our QFTs are only effective theories, valid above a certain length scale. So the question is motivated by pretty much the same sort of reductionist viewpoint that leads people to question how the lawfulness of the Second Law is grounded.
it is weird to me that it basically hasn’t been pursued at all
Probably because it does not have testable consequences?
Yes it does. For one it predicts that the explanations being pursued by physicists are likely to turn out to be false.
Yes. One might worry that the second law, which is clearly not fundamental, doesn’t seem to be grounded in a fundamental law. The usual solution to this is to realize that we are forgetting an important fundamental law, namely the boundary conditions on the universe. Then we realize that the non-fundamental law of entropy increase is grounded in the fundamental law that gives the initial conditions of the universe. I don’t think this is “[coming] up with an elaborate hypothesis whose express purpose is accounting for why [the second law] is lawful,” as you seem to imply. Even if we didn’t need to explain the second law we would expect the fundamental laws to specify the initial conditions of the universe. The second law is just one of the observations that provide evidence about what those initial conditions must have been.
First of all, thank you for your detailed reply.
I think this is near to the core of our disagreement. It seems self-evident that two true laws/descriptions cannot give different predictions about the same system; otherwise, they would not both be true. If two mathematical objects (as laws of physics tend to be) always yield the same results, it seems natural to try and prove their equivalence. For example, when I learned Lagrangian mechanics in physics class, we proved it equivilent to Newtonian mechanics.
So the question arises, “why should the Second Law of Thermodynamics be proved in terms of more “fundamental” laws, rather than the other way around?” (this, if I’m interpreting you correctly, is the double standard). This is simply because the Second Law’s domain in which it can make predictions is much smaller than that of more fundamental laws. The second law of thermodynamics is silent about what happens when I dribble a ball; Newton’s laws are not. As such, one proves the Seccond law in terms of non-thermodynamic laws. “Fundamentalness” seems to simply be a description of domain of applicability.
I’m not qualified to assess the validity of the Weyl curvature hypothesis or of the spontaneous eternal inflation model. However, I’ve always understood that the increase in entropy is simply caused by the boundry conditions of the universe, not any time-asymmetry of the laws of physics.
It’s self-evident that that two true laws/descriptions can’t give contradictory predictions, but in the example I gave there is no contradiction involved. The laws at the fundamental level are invariant under time reversal, but this does not entail that a universe governed by those laws must be invariant under time reversal, so there’s nothing contradictory about there being another law that is not time reversal invariant.
What do you mean by “yield the same results”? The Second Law makes predictions about the entropy of composite systems. The fundamental laws make predictions about quantum field configurations. These don’t seem like yielding the same results. Of course, the results have to be consistent in some broad sense, but surely consistency does not imply equivalency. I think the intuitions you describe here are motivated by nomic reductionism, and they illustrate the difference between thinking of laws as rules and thinking of them as descriptions.
No. I don’t take it for granted that either law can be reduced to the other one. It is not necessary that the salient patterns at a non-fundamental level of description are merely a consequence of salient patterns at a lower level of descriptions.
Well, yes, if the Second Law holds, then the early universe must have had low entropy, but many physicists don’t think this is a satisfactory explanation by itself. We could explain all kinds of things by appealing to special boundary conditions but usually we like our explanations to be based on regularities in nature. The Weyl curvature hypothesis and spontaneous eternal inflation are attempts to explain why the early universe had low entropy.
Incidentally, while there are many heuristic arguments that the early universe had a low entropy (such as appeal to its homogeneity), I have yet to see a mathematically rigorous argument. The fact is, we don’t really know how to apply the standard tools of statistical mechanics to a system like the early universe.
The entropy of a system can be calculated from the quantum field configurations, so predictions about them are predictions about entropy. This entropy prediction must math that of the laws of thermodynamics, or the laws are inconsistent.
This is incorrect. Entropy is not only dependent upon the microscopic state of a system, it is also dependent upon our knowledge of that state. If you calculate the entropy based on an exact knowledge of the microscopic state, the entropy will be zero (at least for classical systems; quantum systems introduce complications), which is of course different from the entropy we would calculate based only on knowledge of the macroscopic state of the system. Entropy is not a property that can be simply reduced to fundamental properties in the manner you suggest.
In any case, even if it were true that full knowledge of the microscopic state would allow us to calculate the entropy, it still wouldn’t follow that knowledge of the microscopic laws would allow us to derive the Second Law. The laws only tell us how states evolve over time; they don’t contain information about what the states actually are. So even if the properties of the states are reducible, this does not guarantee that the laws are reducible.
I’m a bit skeptical of your claim that entropy is dependent on your state of knowledge; It’s not what they taught me in my Statistical Mechanics class, and it’s not what my brief skim of Wikipedia indicates. Could you provide a citation or something similar?
Regardless, I’m not sure that matters. Let’s say you start with some prior over possible initial microstates. You can then time evolve each of these microstates separately; now you have a probability distribution over possible final microstates. You then take the entropy of the this system.
I agree that some knowledge of what the states actually are is built into the Second Law. A more careful claim would be that you can derive the Second Law from certain assumptions about initial conditions and from laws I would claim are more fundamental.
Sure. See section 5.3 of James Sethna’s excellent textbook for a basic discussion (free PDF version available here). A quote:
“The most general interpretation of entropy is as a measure of our ignorance about a system. The equilibrium state of a system maximizes the entropy because we have lost all information about the initial conditions except for the conserved quantities… This interpretation—that entropy is not a property of the system, but of our knowledge about the system (represented by the ensemble of possibilities) -- cleanly resolves many otherwise confusing issues.”
The Szilard engine is a nice illustration of how knowledge of a system can impact how much work is extractable from a system. Here’s a nice experimental demonstration of the same principle (see here for a summary). This is a good book-length treatment of the connection between entropy and knowledge of a system.
Yes, but the prior over initial microstates is doing a lot of work here. For one, it is encoding the appropriate macroproperties. Adding a probability distribution over phase space in order to make the derivation work seems very different from saying that the Second Law is provable from the fundamental laws. If all you have are the fundamental laws and the initial microstate of the universe then you will not be able to derive the Second Law, because the same microscopic trajectory through phase space is compatible with entropy increase, entropy decrease or neither, depending on how you carve up phase space into macrostates.
EDITED TO ADD: Also, simply starting with a prior and evolving the distribution in accord with the laws will not work (even ignoring what I say in the next paragraph). The entropy of the probability distribution won’t change if you follow that procedure, so you won’t recover the Second Law asymmetry. This is a consequence of Liouville’s theorem. In order to get entropy increase, you need a periodic coarse-graining of the distribution. Adding this ingredient makes your derivation even further from a pure reduction to the fundamental laws.
In any case, it is not so clear that even the procedure you propose works. The main account of why the entropy was low in the early universe appeals to the entropy of the gravitational field as compensation for the high thermal entropy of the initial state. As of yet, I haven’t seen any rigorous demonstration of how to apply the standard tools of statistical physics to the gravitational field, such as constructing a phase space which incorporates gravitational degrees of freedom. Hawking and Page attempted to do something like this (I could find you the citation if you like, but I can’t remember it off the top of my head), but they came up with weird results. (ETA: Here’s the paper I was thinking of.) The natural invariant measure over state space turned out not to be normalizable in their model, which means that one could not define sensible probability distributions over it. So I’m not yet convinced that the techniques we apply so fruitfully when it comes to thermal systems can be applied to universe as a whole.
Dang, you’re right. I’m still not entirely convinced of your point in the original post, but I think I need to do some reading up in order to:
Understand the distinction in approach to the Second Law you’re proposing is not sufficiently explored
See if it seems plausible that this is a result of treating physics as rules instead of descriptions.
This has been an interesting thread; I hope to continue discussing this at some point in the not super-distant future (I’m going to be pretty busy over the next week or so).
Thanks for that comment, I very much enjoy these topics.
Why would we not be able to accurately describe and process the occasional phenomenon that went counter to the Second Law?
Intermittent decreases in entropy might even make the evolution of complex brains more likely, at least it does not make the existence of agents such as us less likely prima facie. If you want to rely on the Anthropic Principle, you’d need to establish why it would prefer such strict adherence to the Second Law.
Are you familiar with Smolin’s paper on the AP? “It is explained in detail why the Anthropic Principle (AP) cannot yield any falsifiable predictions, and therefore cannot be a part of science.” For a rebuttal see the Smolin Susskind dialogue here.
Even if there were a case to be made that agency would only be possible if the partition generally follows the Second Law, it would be outright unexpected for the partition to follow it as strictly as we assume it does.
Out of the myriad trajectories through phase space, why would the one perfectly (in the sense of as yet unfalsified) mimicking the Second Law be taken? There could surely exist agencies if there were just a general, or even very close, correspondence. Which would be vastly more likely for us to observe, if we were iid chosen from all such worlds with agency (self sampling assumption).
I am familiar with Smolin’s objections, but I don’t buy them. His argument hinges on accepting an outmoded Popperian philosophy of science. I don’t think it holds if one adopts a properly Bayesian perspective. In any case, I think my particular form of anthropic argument counts as a selection effect within one world, a form of argument to which even he doesn’t object.
As for the ubiquity of Second Law-obeying systems, I admit it is something I have thought about and it does worry me a little. I don’t have a fully worked response, but here’s a tentative answer: If there were the occasional spontaneously entropy decreasing macroscopic system in our environment, the entropy decrease would be very difficult to corral. As long as such a system could interact with other systems, we could use it to extract work from those other systems as well. And, as I said, if most of the systems in our environment were not Second Law-obeying, then we could not exercise our agency by learning about them and acting on them based on what we learn. So perhaps there’s a kind of instability to the situation where a few systems don’t obey the Second Law while the rest do that explains why this is not the situation we’re in.
Interesting idea, but doesn’t it lead to something akin to the Boltzmann Brain problem? This asymmetry would hold for an agent’s brain and its close environment, but I don’t see a reason why it should hold in the same way for the wider universe. So shouldn’t we predict that when we make new observations with information coming from outside our previous past lightcone, we will not see the same Second Law holding? Or maybe I have misunderstood you completely...
The Boltzmann brain problem usually arises when your model assigns a probability distribution over the universal phase space according to which an arbitrary observer is more likely to be a Boltzmann brain than an ordinary observer. There are various reasons why my model does not succumb to this probabilistic kind of Boltzmann brain problem which I’d be happy to go into if you desire.
However, your particular concern seems to be of a different kind. It’s not that Boltzmann brains are more likely according to the model, it is that the model gives no reason to suppose that we are not Boltzmann brains. The model does not tell us why we should expect macroscopic regularities to continue to hold outside our immediate environment. Is this an accurate assessment of your worry? If it is, I think it is demanding too much of a physical model. You are essentially asking for a solution to the problem of induction, I think. My view is that we should expect (certain) macroscopic regularities to persist for the same sorts of reasons that we expect microscopic regularities to persist. Of course, if there were specific probabilistic arguments against the persistence of macroscopic regularities, I would have a problem. But like I said above, those don’t arise for my model the same way they do for Boltzmann’s.
Yes, your second paragraph gets at what I was thinking (and you are right that it is not exactly the Boltzmann Brain problem). But I don’t think it is the same as the general problem of induction, either.
On your model, if I understand correctly, there are microscopic, time-symmetric laws that hold everywhere. (That they hold everywhere, and not just in our experience, is something we take for granted—we are not allowing Humean worries about induction while doing physics, and that’s fine.) But on top of that there is a macroscopic law that we observe, the Second Law, and you are proposing (I think—maybe I misunderstand you) that its explanation lies in the fact that we are agents and observers, and that the immediate environment of a system that is an agent and observer must exhibit this kind of time asymmetry. But then we should not expect this macroscopic regularity to hold beyond our immediate environment. I think this is ordinary scientific reasoning, not Humean skepticism.
Do you have a similar concern about Tegmark’s anthropic argument for the microscopic laws? It only establishes that we must be in a universe where our immediate environment follows those laws, not that those laws hold everywhere in the universe.
I am not really familiar with the details of Tegmark’s proposal. If your two-sentence summary is accurate, then yes, I would have concerns.
Hmmm… I’m not yet sure how bothered I should be about your worry. Possibly a lot. I’ll have to think about it.
The Second Law includes the definition of the partitions to which it applies: it specifically allows ‘local’ reductions in entropy, but for any partition which exhibits a local decrease in entropy, the complementary partition exhibits a greater total increase in entropy.
If you construct your partition creatively, consider the complementary partition which you are also constructing?
I think we’re using the word “partition” in two different senses. When I talk about a partition of phase space, I’m referring to this notion. I’m not sure exactly what you’re referring to.
How can that be implemented to apply to Newtonian space?
The partition isn’t over Newtonian space, it’s over phase space, a space where every point represents an entire dynamical state of the system. If there are N particles in the system, and the particles have no internal degrees of freedom, phase space will have 6N dimensions, 3N for position and 3N for momentum. A partition over phase space is a division of the space into mutually exclusive sub-regions that collectively exhaust the space. Each of these sub-regions is associated with a macrostate of the system. Basically you’re grouping together all the microscopic dynamical configurations that are macroscopically indistinguishable.
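If it helps, here is what that amounts to computationally: a sketch (Python, with a toy particle number and a hypothetical choice of macrovariables of my own) in which a partition is just a function sending every microstate, i.e. every point of the 6N-dimensional phase space, to exactly one macrostate label:

```python
import numpy as np

# A microstate of N particles (no internal degrees of freedom) is a point in
# 6N-dimensional phase space: 3N position coordinates + 3N momentum coordinates.
N = 4
microstate = np.random.rand(6 * N)          # a toy microstate

# A partition assigns every microstate to exactly one macrostate.  A simple
# (hypothetical) choice of macrovariables: which half of the box the centre
# of mass sits in, plus a coarse bin for the total kinetic energy.
def macrostate(microstate, n_particles=N, box=1.0, energy_bin=0.5):
    x = microstate[:3 * n_particles].reshape(n_particles, 3)
    p = microstate[3 * n_particles:].reshape(n_particles, 3)
    com_side = int(x[:, 0].mean() > box / 2)       # 0 = left half, 1 = right half
    kinetic = 0.5 * (p ** 2).sum()                 # unit masses, for simplicity
    return (com_side, int(kinetic // energy_bin))  # a discrete macrostate label

print(macrostate(microstate))
# Microstates that differ microscopically but land on the same label are
# "macroscopically indistinguishable" under this particular partition.
```

Any such labelling that is exhaustive and mutually exclusive counts as a partition; which macrovariables you build it from is exactly the choice at issue in this thread.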
Now, describe a state in which the entropy of an isolated system will decrease over some time period. Calculate entropy at the same level of abstraction at which you are describing the system (if you describe temperature as temperature, use temperature; if you describe energy states of electrons and velocities of particles, use those instead of temperature to calculate entropy).
When I last checked post-Newtonian physics, I didn’t see the laws of thermodynamics included. Clearly some of the conservation rules don’t apply once others have been provably violated; momentum isn’t conserved when mass isn’t conserved, for example.
The entropy of a closed system in equilibrium is given by the logarithm of the volume of the region of phase space corresponding to the system’s macrostate. So if we partition phase space differently, so that the macrostates are different, judgments about the entropy of particular microstates will change. Now, according to our ordinary partitioning of phase space, the macrostate associated with an isolated system’s initial microstate will not have a larger volume than the macrostate associated with its final microstate. However, this is due to the partition, not just the system’s actual microscopic trajectory. With a different partition, the same microscopic trajectory will start in a macrostate of higher entropy and evolve to a macrostate of lower entropy.
Of course, this latter partition will not correspond nicely with any of the macroproperties (such as, say, system volume) that we work with. This is what I meant when I called it unnatural. But its unnaturalness has to do with the way we are constructed. Nature doesn’t come pre-equipped with a list of the right macroproperties.
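Before the physical example below, here is a toy numerical version of that claim (Python; a one-dimensional stand-in for phase space and two partitions I made up), in which the very same trajectory counts as entropy-increasing under one partition and entropy-decreasing under another:

```python
import numpy as np

# Toy "phase space": the unit interval [0, 1).  A Boltzmann-style entropy
# assigns to a microstate x the log-volume of whichever macro-cell contains x.
def boltzmann_entropy(x, cell_edges):
    """log of the width of the partition cell containing each point of x."""
    idx = np.searchsorted(cell_edges, x, side='right') - 1
    widths = np.diff(cell_edges)
    return np.log(widths[idx])

# One and the same microscopic trajectory: it simply drifts from x=0.05 to x=0.95.
trajectory = np.linspace(0.05, 0.95, 10)

# Partition A: tiny cells on the left, one huge cell on the right.
# Under A the trajectory climbs from small cells into a big cell: entropy increases.
edges_A = np.array([0.0, 0.02, 0.04, 0.06, 0.08, 0.10, 1.0])

# Partition B: one huge cell on the left, tiny cells on the right.
# Under B the same trajectory falls from a big cell into small cells: entropy decreases.
edges_B = np.array([0.0, 0.90, 0.92, 0.94, 0.96, 0.98, 1.0])

for name, edges in [('A', edges_A), ('B', edges_B)]:
    print(name, np.round(boltzmann_entropy(trajectory, edges), 2))
# Same trajectory, opposite entropy trends -- the verdict comes from the partition,
# not from the microscopic motion itself.
```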
Here’s an example: Put a drop of ink in a glass of water. The ink will gradually spread out through the water. This is a process in which entropy increases. There are many different ways the ink could initially be dropped into the water (on the right or left side of the cup, for instance), and we can distinguish between these different ways just by looking. As the ink spreads out, we are no longer able to distinguish between different spread out configurations. Even though we know that dropping the ink on the right side must lead to a microscopic spread out configuration different from the one we would obtain by dropping the ink on the left side, these configurations are not macroscopically distinguishable once the ink has spread out enough. They both just look like ink uniformly spread throughout the water. This is characteristic of entropy increase: macroscopically available distinctions get suppressed. We lose macroscopic information about the system.
Now think of some kind of alien with a weird sensory apparatus. Its senses do not allow it to distinguish between different ways of initially dropping the ink into the water. The percepts associated with an ink drop on the right side of the cup and a drop on the left side of the cup are sufficiently similar that it cannot tell the difference. However, it is able to distinguish between different spread out configurations. To this alien the ink mixing in water would be an entropy decreasing process because its natural macrostates are different from ours. Now obviously the alien’s sensory and cognitive apparatus would be hugely different from our own, and there might be all kinds of biological reasons we would not expect such an alien to exist, but the point is that there is nothing in the fundamental laws of physics ruling out its existence.
No, you can’t redefine the phase-space volumes so that more than one macrostate exists within a given partition, and you can’t use a different scale to determine the macrostate than you do for the entropy.
Of course, to discuss a system not in equilibrium, you need to use formulas that apply to systems that aren’t in equilibrium. The only time your system is in equilibrium is at the end, after the ink has either completely diffused or settled to the top or bottom.
And the second law of thermodynamics applies to isolated systems, not closed systems. Isolated systems are a subset of closed systems.
We still seem to be talking past each other. Neither of these is an accurate description of what I’m doing. In fact, I’m not even sure what you mean here. I still suspect you haven’t understood what I mean when I talk about a partition of phase space. Maybe you could clarify how you’re interpreting the concept?
Yes, I recognize this. None of what I said about my example relies on the process being quasistatic. Of course, if the system isn’t in equilibrium, its entropy isn’t directly measurable as the volume of the corresponding macroregion, but it is the Shannon entropy of a probability distribution that only has support within the macroregion (i.e., it vanishes outside the macroregion). The difference from equilibrium is that the distribution won’t be uniform within the relevant macroregion. It is still the case, though, that a distribution spread out over a much larger macroregion will in general have a higher entropy than one spread out over a smaller volume, so using volume in phase space as a proxy for entropy still works.
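A quick numerical illustration of that last claim (Python, with cell counts and a lumpy profile chosen arbitrarily by me): a non-uniform distribution supported on a macroregion has somewhat lower entropy than the uniform one on the same region, but both dwarf a distribution confined to a much smaller region, so phase-space volume remains a serviceable proxy:

```python
import numpy as np

def shannon_entropy(p):
    """Entropy of a discrete distribution (natural log), ignoring zero cells."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Discretize a macroregion into cells.  An equilibrium-style distribution is
# uniform over the whole region; a non-equilibrium one has support in the same
# region but is lumpy (here: a toy decaying profile of my own choosing).
n_cells = 1000
uniform = np.full(n_cells, 1.0 / n_cells)
lumpy = np.exp(-np.arange(n_cells) / 100.0)
lumpy /= lumpy.sum()
small_region = np.full(50, 1.0 / 50)            # uniform over a 20x smaller region

print(round(shannon_entropy(uniform), 2))       # log(1000) ~ 6.91
print(round(shannon_entropy(lumpy), 2))         # a bit lower: same support, non-uniform
print(round(shannon_entropy(small_region), 2))  # log(50) ~ 3.91: much lower
```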
Fair enough. My use of the word “closed” was sloppy. Don’t see how this affects the point though.
Now you’ve put yourself in a position which is inconsistent with your previous claim that diffuse ink can be defined to have a lower entropy than a mixture of concentrated ink and pure water. One response is that they have virtually identical entropy. That’s also the correct answer, since the isolated system of the container of water reaches maximum entropy when the temperature has equalized and the ink is fully diffused. The ink does not spontaneously concentrate back into a drop, despite the very small drop in entropy that would involve.
How so? Again, I really suspect that you are misunderstanding my position, because various commitments you attribute to me do not look at all familiar. I can’t isolate the source of the misunderstanding (if one exists) unless you give me a clear account of what you take me to be saying.
This is where you tried to define the entropy of diffuse ink to be lower. The highest-entropy state is the one in which the constraints on each variable are least restrictive. That means that the state where each ink particle can be in any position within the glass has (other things being equal) higher entropy than a state where each ink particle is constrained to a small area.
Entropy is a physical property similar to temperature, in that at a certain level it becomes momentum. If you view a closed Carnot cycle, you will note that the source loses heat, and the sink gains heat, and that the source must be hotter than the sink. There being no method by which the coldest sink can be made colder, nor by which the total energy can be increased, the gap can only decrease.
You’re applying intuitions garnered from classical thermodynamics, but thermodynamics is a phenomenological theory entirely superseded by statistical mechanics. It’s sort of like applying Newtonian intuitions to resist the implications of relativity.
Yes, in classical thermodynamics entropy is a state function—a property of an equilibrium state just like its volume or magnetization—but we now know (thanks to stat. mech.) that this is not the best way to think about entropy. Entropy is actually a property of probability distributions over phase space, and if you believe that probability is in the mind, it’s hard to deny that entropy is in some sense an agent-relative notion. If probability is in the mind and entropy depends on probability, then entropy is at least partially in the mind as well.
Still, the agent-relativity can be seen in thermodynamics as well, without having to adopt the probabilistic conception of entropy. The First Law tells us that any change in the internal energy of the system is a sum of the heat transferred to the system and the work done on the system. But how do we distinguish between these two forms of energy transfer? Well, heat is energy transferred through macroscopically uncontrollable degrees of freedom, while work is energy transferred through macroscopically controllable degrees of freedom. Whether a particular degree of freedom is macroscopically controllable is an agent-relative notion. Here is the fundamental equation of thermodynamics:
dE = T dS + F1 dX1 + F2 dX2 + F3 dX3 + …
The Fs and Xs here are macroscopic “force” and “displacement” terms, representing different ways we can do mechanical work on the system (or extract work from the system) by adjusting its macroscopic constraints. Particular examples of these force-displacement pairs are pressure-volume (usually this is the only one considered in introductory courses on thermodynamics), electric field-polarization, tension-length. These work terms—the controllable degrees of freedom—are chosen based on our ability to interact with the system, which in turn depends on the kinds of creatures we are. Any part of the change in energy that is not explicable by the work terms is attributed to the heat term—T dS—and the S here is of course thermodynamic entropy. So the entropy comes from the heat term, which depends on the work terms, which in turn depend on our capacities for macroscopic intervention on the system. Aliens with radically different capacities could have different work terms and hence calculate a different thermodynamic entropy. [ETA: And of course the thermodynamic state space is defined by the work terms, which explains how entropy can be a state function and still be an agent-dependent quantity.]
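To make the bookkeeping explicit, here is a toy sketch (Python, with made-up numbers and channel names; not anyone’s real measurement) of how the inferred heat term, and hence T dS, shifts depending on which work channels an agent can track:

```python
# Heat as a residual: T dS = dE - sum of the tracked F dX work terms.
def inferred_heat(dE, tracked_work_terms):
    """Return the T dS an agent infers, given the (F, dX) pairs it can track."""
    return dE - sum(F * dX for F, dX in tracked_work_terms)

dE = -3.0                     # total energy change of the system (toy value)
piston_channel = (2.0, -1.0)  # a pressure-volume-style (F, dX) pair, toy numbers
field_channel = (0.5, -1.5)   # an electric-field/polarization-style pair, toy numbers

# An agent that can only manipulate and monitor the piston:
print(inferred_heat(dE, [piston_channel]))                  # -1.0
# An agent that also tracks the electrical work channel:
print(inferred_heat(dE, [piston_channel, field_channel]))   # -0.25

# Same dE, different tracked work terms, hence a different heat term and a
# different entropy change attributed to the system.
```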
The work we can extract from a system depends on our knowledge of the system. This is a point that has been understood for a while. Read this post on the Szilard engine for a nice illustration of how our knowledge about a system can affect the amount of work we can get it to do. But of course if extractable work depends on knowledge, then the heat dissipated by the system must also depend on our knowledge, since heat is just the complement of work (it is that portion of the energy change that cannot be accounted for by work done). And if the heat dissipated is a function of our knowledge, so is the entropy. If our capacities were different—if we could have more or different knowledge about the system—our judgment of its entropy would differ.
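For what it’s worth, the standard Szilard-engine number is easy to compute (Python sketch; the 300 K temperature is an arbitrary choice of mine):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # an arbitrary room-temperature choice, K

def szilard_work(bits, temperature):
    """Maximum work (J) extractable per cycle given `bits` of information about
    which side of the box the molecule is on: bits * k_B * T * ln 2."""
    return bits * k_B * temperature * math.log(2)

print(szilard_work(1, T))   # ~2.87e-21 J for one bit at 300 K
```

One bit of knowledge about the molecule’s location buys you at most k_B T ln 2 of work per cycle; with no knowledge, the same gas does no net work for you.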
The maximum work you can extract from a system does not depend on knowledge: greater knowledge may let you get work done more efficiently, and if you operate on the scale where raising an electron to a higher energy state is ‘useful work’ and not ‘heat’, then you can minimize the heat term.
But you can’t have perfect knowledge about the system, because matter cannot be perfectly described. If the state of the box becomes more knowable than it was (per Heisenberg uncertainty), then the state outside the box must become less knowable than it was. You could measure the knowability of a system by determining how many states are microscopically indistinguishable from the observed state: as the energies of the particles equalize (such that the number of possible Planck-unit positions is more equally divided between all the particles), there are more total states which are indistinguishable (since the total number of possible states is equal to the product of the number of possible states for each particle, and energy is conserved).
If you can show where there are spontaneous interactions which result in two particles having a greater difference in total energy after they interact than they had before they interact, feel free to win every Nobel prize ever.
It seems likely to me that the laws of motion governing the time evolution of microstates have something to do with determining the “right” macroproperties—that is, the ones that lead to reproducible states and processes on the macro scale. (Something to do with coarse-graining, maybe?) Then natural selection filters for organisms that take advantage of these macro regularities.
Maybe you’re thinking of partitions of actual space? He’s talking about partitions of phase space.
Isn’t that exactly what hidden-variable theories try to do? There have been a lot of people dissatisfied with the probabilistic nature of quantum mechanics, and have sought something more fundamental to explain the probabilities.
Hidden variable theories are not an attempt to ground the lawfulness of quantum mechanics. The Schrodinger equation isn’t reduced to something deeper in Bohmian mechanics. It appears as a basic unexplained law in the theory, just as it does in orthodox interpretations of QM. The motivation behind hidden variable theories is to repair purported conceptual defects in standard presentations of QM, not to account for the existence of the laws of QM.
I do think my claim is wrong, though. People do ask what grounds quantum field theory. In fact, that’s a pretty common question. But that’s mainly because people now realize that our QFTs are only effective theories, valid above a certain length scale. So the question is motivated by pretty much the same sort of reductionist viewpoint that leads people to question how the lawfulness of the Second Law is grounded.