The Second Law includes the definition of the partitions to which it applies: it specifically allows ‘local’ reductions in entropy, but for any partition which exhibits a local decrease in entropy, the complementary partition exhibits a greater total increase in entropy.
If you construct your partition creatively, have you considered the complementary partition which you are also constructing?
I think we’re using the word “partition” in two different senses. When I talk about a partition of phase space, I’m referring to this notion. I’m not sure exactly what you’re referring to.
The partition isn’t over Newtonian space, it’s over phase space, a space where every point represents an entire dynamical state of the system. If there are N particles in the system, and the particles have no internal degrees of freedom, phase space will have 6N dimensions, 3N for position and 3N for momentum. A partition over phase space is a division of the space into mutually exclusive sub-regions that collectively exhaust the space. Each of these sub-regions is associated with a macrostate of the system. Basically you’re grouping together all the microscopic dynamical configurations that are macroscopically indistinguishable.
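To make that concrete, here is a minimal sketch (Python, purely illustrative; the particle count, the unit box, and the left/right coarse-graining rule are all made up for the example) of what a partition amounts to operationally: a rule that assigns every microstate to a macrostate label, so that two microstates sit in the same cell exactly when they get the same label.

```python
import numpy as np

N = 4                        # toy number of particles
rng = np.random.default_rng(0)

def random_microstate():
    """A point in 6N-dimensional phase space: 3N positions plus 3N momenta."""
    q = rng.uniform(0.0, 1.0, size=(N, 3))   # positions inside a unit box
    p = rng.normal(0.0, 1.0, size=(N, 3))    # momenta
    return q, p

def macrostate(q, p):
    """One possible coarse-graining: how many particles sit in the left half
    of the box vs. the right half.  All microstates that map to the same
    label are 'macroscopically indistinguishable' under this partition."""
    left = int(np.sum(q[:, 0] < 0.5))
    return (left, N - left)

q, p = random_microstate()
print(macrostate(q, p))      # e.g. (3, 1) — the label of one cell of the partition
```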
Now, describe a state in which the entropy of an isolated system will decrease over some time period. Calculate entropy at the same level of abstraction as you are describing the system (if you describe temperature as temperature, use temperature; if you describe energy states of electrons and velocities of particles, use those instead of temperature to calculate entropy).
When I checked post-Newtonian physics last, I didn’t see the laws of thermodynamics included. Clearly some of the conservation rules don’t apply in the absence of others which have been provably violated; momentum isn’t conserved when mass isn’t conserved, for example.
The entropy of a closed system in equilibrium is given by the logarithm of the volume of the region of phase space corresponding to the system’s macrostate. So if we partition phase space differently, so that the macrostates are different, judgments about the entropy of particular microstates will change. Now, according to our ordinary partitioning of phase space, the macrostate associated with an isolated system’s initial microstate will not have a larger volume than the macrostate associated with its final microstate. However, this is due to the partition, not just the system’s actual microscopic trajectory. With a different partition, the same microscopic trajectory will start in a macrostate of higher entropy and evolve to a macrostate of lower entropy.
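Here is a toy version of that last claim (my own sketch; the eight microstate labels and the two partitions are invented purely for illustration): the same two-point trajectory through a tiny stand-in for phase space has rising Boltzmann entropy under one partition and falling entropy under a gerrymandered one.

```python
import math

trajectory = [0, 5]          # initial microstate 0, final microstate 5 (toy labels 0..7)

# Partition A (the "ordinary" one): a small cell and a large cell.
partition_A = [{0, 1}, {2, 3, 4, 5, 6, 7}]
# Partition B (gerrymandered): lumps the initial microstate in with many others
# and isolates the final microstate in a small cell.
partition_B = [{0, 2, 3, 4, 6, 7}, {1, 5}]

def boltzmann_entropy(microstate, partition):
    """S = log(size of the macrostate cell containing the microstate)."""
    cell = next(c for c in partition if microstate in c)
    return math.log(len(cell))

for name, partition in [("A", partition_A), ("B", partition_B)]:
    s_initial, s_final = (boltzmann_entropy(x, partition) for x in trajectory)
    print(f"partition {name}: S_initial = {s_initial:.2f}, S_final = {s_final:.2f}")
# Partition A: entropy rises (log 2 -> log 6).  Partition B: it falls (log 6 -> log 2).
```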
Of course, this latter partition will not correspond nicely with any of the macroproperties (such as, say, system volume) that we work with. This is what I meant when I called it unnatural. But its unnaturalness has to do with the way we are constructed. Nature doesn’t come pre-equipped with a list of the right macroproperties.
Here’s an example: Put a drop of ink in a glass of water. The ink will gradually spread out through the water. This is a process in which entropy increases. There are many different ways the ink could initially be dropped into the water (on the right or left side of the cup, for instance), and we can distinguish between these different ways just by looking. As the ink spreads out, we are no longer able to distinguish between different spread out configurations. Even though we know that dropping the ink on the right side must lead to a microscopic spread out configuration different from the one we would obtain by dropping the ink on the left side, these configurations are not macroscopically distinguishable once the ink has spread out enough. They both just look like ink uniformly spread throughout the water. This is characteristic of entropy increase: macroscopically available distinctions get suppressed. We lose macroscopic information about the system.
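A rough numerical cartoon of the ink example (only a sketch: one-dimensional random walkers stand in for ink particles, and the twenty bins are an arbitrary choice of macrostate): the Shannon entropy of the coarse-grained, binned density climbs as the ink spreads out, which is exactly the loss of macroscopic information described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_bins, n_steps = 10_000, 20, 2_000
x = np.full(n_particles, 0.05)               # all the "ink" starts near the left wall

def coarse_grained_entropy(x, n_bins):
    """Shannon entropy of the binned (coarse-grained) particle density."""
    counts, _ = np.histogram(x, bins=n_bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

for step in range(n_steps + 1):
    if step % 500 == 0:
        print(f"step {step:5d}: S = {coarse_grained_entropy(x, n_bins):.3f}")
    x = x + rng.normal(0.0, 0.01, size=n_particles)   # diffusion step
    x = np.abs(x)                                      # reflect at the left wall
    x = 1.0 - np.abs(1.0 - x)                          # reflect at the right wall
# S starts near 0 and climbs toward log(20) ≈ 3.0 as the ink spreads out uniformly.
```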
Now think of some kind of alien with a weird sensory apparatus. Its senses do not allow it to distinguish between different ways of initially dropping the ink into the water. The percepts associated with an ink drop on the right side of the cup and a drop on the left side of the cup are sufficiently similar that it cannot tell the difference. However, it is able to distinguish between different spread out configurations. To this alien the ink mixing in water would be an entropy decreasing process because its natural macrostates are different from ours. Now obviously the alien’s sensory and cognitive apparatus would be hugely different from our own, and there might be all kinds of biological reasons we would not expect such an alien to exist, but the point is that there is nothing in the fundamental laws of physics ruling out its existence.
No, you can’t redefine the phase state volumes so that more than one macrostate exists within a given partition, and you can’t use a different scale to determine macrostate than you do for entropy.
Of course, to discuss a system not in equilibrium, you need to use formulas that apply to systems that aren’t in equilibrium. The only time your system is in equilibrium is at the end, after the ink has either completely diffused or settled to the top or bottom.
And the second law of thermodynamics applies to isolated systems, not closed systems. Isolated systems are a subset of closed systems.
No, you can’t redefine the phase state volumes so that more than one macrostate exists within a given partition, and you can’t use a different scale to determine macrostate than you do for entropy.
We still seem to be talking past each other. Neither of these is an accurate description of what I’m doing. In fact, I’m not even sure what you mean here. I still suspect you haven’t understood what I mean when I talk about a partition of phase space. Maybe you could clarify how you’re interpreting the concept?
The only time your system is in equilibrium is at the end, after the ink has either completely diffused or settled to the top or bottom.
Yes, I recognize this. None of what I said about my example relies on the process being quasistatic. Of course, if the system isn’t in equilibrium, its entropy isn’t directly measurable as the volume of the corresponding macroregion, but it is the Shannon entropy of a probability distribution that only has support within the macroregion (i.e. it vanishes outside the macroregion). The difference from equilibrium is that the distribution won’t be uniform within the relevant macroregion. It is still the case, though, that a distribution spread out over a much larger macroregion will in general have a higher entropy than one spread out over a smaller volume, so using volume in phase space as a proxy for entropy still works.
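For reference, the relation I’m leaning on here, written out (this is just the standard Gibbs/Shannon form, plus the fact that the uniform distribution maximizes entropy over a region of given volume):

S[ρ] = −k_B ∫ ρ(x) ln ρ(x) dx ≤ k_B ln vol(Γ_M)

where the integral runs over the macroregion Γ_M that supports ρ, with equality exactly when ρ is uniform over Γ_M. That is why phase-space volume remains a serviceable proxy for entropy even out of equilibrium.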
And the second law of thermodynamics applies to isolated systems, not closed systems. Isolated systems are a subset of closed systems.
Fair enough. My use of the word “closed” was sloppy. Don’t see how this affects the point though.
Now you’ve put yourself in a position which is inconsistent with your previous claim that diffuse ink can be defined to have a lower entropy than a mixture of concentrated ink and pure water. One response is that they have virtually identical entropy. That’s also the correct answer, since the isolated system of the container of water reaches a maximum entropy when temperature is equalized and the ink is fully diffused. The ink does not spontaneously concentrate back into a drop, despite the very small drop in entropy that would involve.
Now you’ve put yourself in a position which is inconsistent with your previous claim that diffuse ink can be defined to have a lower entropy than a mixture of concentrated ink and pure water.
How so? Again, I really suspect that you are misunderstanding my position, because various commitments you attribute to me do not look at all familiar. I can’t isolate the source of the misunderstanding (if one exists) unless you give me a clear account of what you take me to be saying.
Now think of some kind of alien with a weird sensory apparatus. Its senses do not allow it to distinguish between different ways of initially dropping the ink into the water. The percepts associated with an ink drop on the right side of the cup and a drop on the left side of the cup are sufficiently similar that it cannot tell the difference. However, it is able to distinguish between different spread out configurations. To this alien the ink mixing in water would be an entropy decreasing process because its natural macrostates are different from ours. Now obviously the alien’s sensory and cognitive apparatus would be hugely different from our own, and there might be all kinds of biological reasons we would not expect such an alien to exist, but the point is that there is nothing in the fundamental laws of physics ruling out its existence.
This is where you tried to define the entropy of diffuse ink to be lower.
The highest entropy phase state is the one in which the constraints on each variable are least restrictive. That means that the state where each ink particle can be in any position within the glass is (other things being equal) higher entropy than a state where each ink particle is constrained to be in a small area.
Entropy is a physical property similar to temperature, in that at a certain level it becomes momentum. If you view a closed Carnot cycle, you will note that the source loses heat, and the sink gains heat, and that the source must be hotter than the sink. There being no method by which the coldest sink can be made colder, nor by which the total energy can be increased, the gap can only decrease.
You’re applying intuitions garnered from classical thermodynamics, but thermodynamics is a phenomenological theory entirely superseded by statistical mechanics. It’s sort of like applying Newtonian intuitions to resist the implications of relativity.
Yes, in classical thermodynamics entropy is a state function—a property of an equilibrium state just like its volume or magnetization—but we now know (thanks to stat. mech.) that this is not the best way to think about entropy. Entropy is actually a property of probability distributions over phase space, and if you believe that probability is in the mind, it’s hard to deny that entropy is in some sense an agent-relative notion. If probability is in the mind and entropy depends on probability, then entropy is at least partially in the mind as well.
Still, the agent-relativity can be seen in thermodynamics as well, without having to adopt the probabilistic conception of entropy. The First Law tells us that any change in the internal energy of the system is a sum of the heat transferred to the system and the work done on the system. But how do we distinguish between these two forms of energy transfer? Well, heat is energy transferred through macroscopically uncontrollable degrees of freedom, while work is energy transferred through macroscopically controllable degrees of freedom. Whether a particular degree of freedom is macroscopically controllable is an agent-relative notion. Here is the fundamental equation of thermodynamics:
dE = T dS + F1 dX1 + F2 dX2 + F3 dX3 + …
The Fs and Xs here are macroscopic “force” and “displacement” terms, representing different ways we can do mechanical work on the system (or extract work from the system) by adjusting its macroscopic constraints. Particular examples of these force-displacement pairs are pressure-volume (usually this is the only one considered in introductory courses on thermodynamics), electric field-polarization, tension-length. These work terms—the controllable degrees of freedom—are chosen based on our ability to interact with the system, which in turn depends on the kinds of creatures we are. Any part of the change in energy that is not explicable by the work terms is attributed to the heat term—T dS—and the S here is of course thermodynamic entropy. So the entropy comes from the heat term, which depends on the work terms, which in turn depend on our capacities for macroscopic intervention on the system. Aliens with radically different capacities could have different work terms and hence calculate a different thermodynamic entropy. [ETA: And of course the thermodynamic state space is defined by the work terms, which explains how entropy can be a state function and still be an agent-dependent quantity.]
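Spelled out with the familiar pairs (using the usual sign convention, in which pressure–volume work done on the system enters with a minus sign, and leaving the electromagnetic pair aside to avoid a clash with the symbol E), the same equation reads:

dE = T dS − p dV + f dL + …

where p and V are pressure and volume and f and L are tension and length; the field–polarization pair works the same way as any other work term.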
The work we can extract from a system depends on our knowledge of the system. This is a point that has been understood for a while. Read this post on the Szilard engine for a nice illustration of how our knowledge about a system can affect the amount of work we can get it to do. But of course if extractable work depends on knowledge, then the heat dissipated by the system must also depend on our knowledge, since heat is just the complement of work (it is that portion of the energy change that cannot be accounted for by work done). And if the heat dissipated is a function of our knowledge, so is the entropy. If our capacities were different—if we could have more or different knowledge about the system—our judgment of its entropy would differ.
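For the record, the textbook figure for the Szilard engine: with one bit of information about which half of the box the molecule is in, the maximum work extractable in a single isothermal cycle is

W_max = k_B T ln 2

and, more generally, the extra work an agent can extract is bounded by k_B T times the information (in nats) it has acquired about the microstate. Change what the agent can know and you change the work/heat split, and with it the entropy bookkeeping.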
The maximum work you can extract from a system does not depend on knowledge: greater knowledge may let you get work done more efficiently, and if you operate on the scale where raising an electron to a higher energy state is ‘useful work’ and not ‘heat’, then you can minimize the heat term.
But you can’t have perfect knowledge about the system, because matter cannot be perfectly described. If the state of the box becomes more knowable than it was (per Heisenberg uncertainty), then the state outside the box must become less knowable than it was. You could measure the knowability of a system by determining how many states are microscopically indistinguishable from the observed state: as energies of the particles equalize (such that the number of possible Planck-unit positions is more equally divided between all the particles), there are more total states which are indistinguishable (since the total number of possible states is equal to the product of the number of possible states for each particle, and energy is conserved).
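For what it’s worth, the purely combinatorial part of this checks out in a toy model (numbers made up, and the quantum framing ignored entirely): for a fixed total number of “slots” shared between two particles, the product of per-particle possibilities is largest when the split is even.

```python
# Toy check: with a fixed total shared between two particles, the product of
# per-particle possibilities peaks at the even split (arbitrary units).
total = 20
for a in range(0, total + 1, 5):
    b = total - a
    print(f"split {a:2d}/{b:2d}: product of possibilities = {a * b}")
# 0/20 -> 0, 5/15 -> 75, 10/10 -> 100 (the maximum), 15/5 -> 75, 20/0 -> 0
```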
If you can show where there are spontaneous interactions which result in two particles having a greater difference in total energy after they interact than they had before they interact, feel free to win every Nobel prize ever.
But its unnaturalness has to do with the way we are constructed. Nature doesn’t come pre-equipped with a list of the right macroproperties.
It seems likely to me that the laws of motion governing the time evolution of microstates have something to do with determining the “right” macroproperties—that is, the ones that lead to reproducible states and processes on the macro scale. (Something to do with coarse-graining, maybe?) Then natural selection filters for organisms that take advantage of these macro regularities.
How can that be implemented to apply to Newtonian space?
Maybe you’re thinking of partitions of actual space? He’s talking about partitions of phase space.