In a previous thread, I brought up the subject of entropy being subjective and got a lot of interesting responses. One point of contention was that if you know the positions and velocities of all the molecules in a hot cup of tea, then its temperature is actually at absolute zero (!). I realized that the explanation of this in usual terms is a bit clumsy and awkward. I’m thinking maybe if this could be explained in terms of reversible operations on strings of bits (abstracting away from molecules and any solid physical grounding), it might be easier to precisely see why this is the case. In other words, I’m looking for a dynamical systems interpretation of this idea. I googled a bit but couldn’t find any accessible material on this. There’s a book about dynamical systems approaches to thermodynamics but it’s extremely heavy and does not seem to have been reviewed in any detail so I’m not even sure of the validity of the arguments. Anyone know of any accessible materials on ideas like this?
Isn’t all this just punning on definitions? If the particle velocities in a gas are Maxwell-Boltzmann distributed for some parameter T, we can say that the gas has “Maxwell-Boltzmann temperature T”. Then there is a separate Jaynes-style definition of “temperature” in terms of the knowledge someone has about the gas. If all you know is that the velocities follow a certain distribution, then the two definitions coincide. But if you happen to know more about it, it is still the case that almost all interesting properties follow from the coarse-grained velocity distribution (the gas will still melt ice cubes and so on), so rather than saying that it has zero temperature, should we not just note that the information-based definition no longer captures the ordinary notion of temperature?
You are essentially right. The point is that ‘average kinetic energy of particles’ is just a special case that happens to correspond to the Jaynes-style definition, for some types of systems. But the Jaynes-style definition is the ‘true’ definition that is valid for all systems.
But if you happen to know more about it, it is still the case that almost all interesting properties follow from the coarse-grained velocity distribution (the gas will still melt ice cubes and so on)
Again, as I mentioned in my previous replies, the gas will melt ice cubes, but is only in thermal equilibrium with 0 K ice cubes.
Again, as I mentioned in my previous replies, the gas will melt ice cubes, but is only in thermal equilibrium with 0 K ice cubes.
This claim seems dubious to me.
Like, the “original, naive” definition of thermal equilibrium is that two systems are out of equilibrium if, when you put them in contact with each other, heat will flow from one to the other. If you have a 0 K ice cube on one hand and a gas plus a piece of RAM encoding the state of the gas on the other, then they certainly do not seem to be in equilibrium in this sense: when you remove the partitioning wall, the gas atoms will start bouncing against the cube, the ice atoms will start moving, and the energy of the ice cube atoms increases. Heat energy was transferred from one system to the other.
I am not claiming that there is some other temperature T such that an ice cube at T would be in equilibrium with the system; rather, it seems the gas+RAM system is itself not in thermal equilibrium, and therefore does not have a temperature?
My more general point is that one can not just claim by fiat that the Jaynes-style definition is the “true” one; if there are multiple definitions in play and they sometimes disagree, then one has to see which one is more useful. Thermodynamics was originally motivated by heat energy flowing between different gases. It seems that in these (highly artificial) examples, the information-based definition no longer describes heat flow well, which would be a mark against it...
If you have a 0 K ice cube on one hand and a gas plus a piece of RAM encoding the state of the gas on the other, then they certainly do not seem to be in equilibrium in this sense:
gas+RAM is not in thermal equilibrium with the ice cube, because a large enough stick of RAM to hold this information would itself have entropy, and a lot of it (far, far larger than the information it is storing). This is actually the reason why Maxwell demons are impossible in practice—storing the information becomes a very difficult problem, and the entropy of the system becomes entirely contained within the storage medium. If the storage medium is assumed to be immaterial (an implicit assumption which we are making in this example), then the total system entropy is 0 and it’s at 0 K.
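To put a rough number on how costly that record is, Landauer’s principle says erasing each bit of it dissipates at least kT·ln 2 of heat, and each bit can be assigned k·ln 2 of entropy in conventional units. A back-of-the-envelope sketch in Python; the particle count, bits per particle, and temperature are assumptions chosen only for illustration:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

# Illustrative assumptions: one mole of gas at room temperature, and
# (very roughly) a few hundred bits to record each particle's position
# and momentum to reasonable precision.
N_particles = 6.022e23
bits_per_particle = 300
T = 300.0  # K

total_bits = N_particles * bits_per_particle

# Landauer bound: minimum heat dissipated to erase the record.
erasure_cost_joules = total_bits * k_B * T * math.log(2)

# Entropy assignable to the record itself, in conventional units.
record_entropy = total_bits * k_B * math.log(2)  # J/K

print(f"bits stored:           {total_bits:.2e}")
print(f"erasure cost at {T:.0f} K: {erasure_cost_joules:.2e} J")
print(f"entropy of record:     {record_entropy:.2e} J/K")
```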
My more general point is that one can not just claim by fiat that the Jaynes-style definition is the “true” one;
It is true for the same reason that Bayesian updating is the only true method for updating beliefs; any other method is either suboptimal or inconsistent or both. In fact it is the very same reason, because the entropy of a physical system is literally the entropy of the Bayesian posterior distribution of the parameters of the system according to some model.
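Concretely, the entropy being talked about here is just the Gibbs/Shannon entropy of whatever distribution over microstates your model and information leave you with. A minimal sketch (the state count and the particular distributions are made up for illustration):

```python
import numpy as np

def entropy(p):
    """Gibbs/Shannon entropy of a discrete distribution, in nats."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # take 0 * log 0 = 0
    return -np.sum(p * np.log(p))

n_states = 1000

# Broad posterior: all you know is that the system is in one of
# n_states equally likely microstates.
uniform = np.full(n_states, 1.0 / n_states)

# Sharp posterior: you know the exact microstate.
delta = np.zeros(n_states)
delta[42] = 1.0

print(entropy(uniform))  # log(1000), about 6.9 nats
print(entropy(delta))    # 0 -- complete knowledge means zero entropy
```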
At least if you are talking about Physics, whether you know the position and velocity of every atom in a system or not is irrelevant to what its temperature is. The point of thermodynamics is that under a broad range of conditions, there are statistical quantities which are predictable, such as the average kinetic energy of components after “enough” time has passed (enough time to reach thermal equilibrium in a given experiment), and of course it is not only the average kinetic energy but the whole distribution of energies which is known.
There may be interesting or even amusing information theoretic senses in which tracking the microscopic details can be said to have zero entropy, but these do not impact the physics of the system. If the system is one in which the conditions for reaching thermal equilibrium are present, then we will be able to predict the same distribution of kinetic energies in the system whether or not we are tracking every single molecule’s velocity and position.
As to whether or not a 0 K ice cube will melt, it will melt if you put it in contact with a gas that has enough kinetic energy in it such that when that energy is divided by all the molecules in the system, the average energy per molecule is greater than k*274 K where k is Boltzmann’s constant. Your detailed knowledge of every molecule’s position and velocity will not stop the ice cube from transitioning to its liquid state as kinetic energy from the gas is transferred into the ice cube.
No matter how entertaining alternative definitions of entropy and temperature might be, they are completely irrelevant to the time-evolution of liquids, gases, and solids interacting under the conditions in which they are described to good accuracy by thermodynamics. There is nothing arbitrary or “in the mind” about thermodynamics; it is a simplified map of a large range of real situations, a map whose accuracy is not affected by having additional knowledge of the terrain being mapped.
There may be interesting or even amusing information theoretic senses in which tracking the microscopic details can be said to have zero entropy, but these do not impact the physics of the system.
Yes this is the entire point; entropy seems to be disassociated from what’s “out there.”
As to whether or not a 0 K ice cube will melt, it will melt if you put it in contact with a gas that has enough kinetic energy in it such that when that energy is divided by all the molecules in the system, the average energy per molecule is greater than k*274 K where k is Boltzmann’s constant.
And no one has said otherwise. But if you consider gas+information together, you can no longer consistently say it’s at anything other than 0 K.
There is nothing arbitrary or “in the mind” about thermodynamics; it is a simplified map of a large range of real situations,
I think you’re misunderstanding what “in the mind” means. It does not mean that our thoughts can influence physics. Rather, it means that quantities like entropy and temperature depend (to you) on the physical model in which you’re viewing the system.
I think you’re misunderstanding what “in the mind” means. It does not mean that our thoughts can influence physics. Rather, it means that quantities like entropy and temperature depend (to you) on the physical model in which you’re viewing the system.
I don’t think I am misunderstanding anything. But it is possible that I am merely not misunderstanding the physics, I suppose. But I participated in the other threads and I am pretty sure I know what we are talking about.
To the extent that you want to define something that allows you to characterize a boiling pot of water as having either zero entropy or zero temperature, define away. I will “merely” point out that the words entropy and temperature have already been applied to that situation by others who have come before you and in a way which is not altered by any knowledge you may have beyond the extensive quantities of the boiling pot of water.
I will point out that your quantities of “entropy” and “temperature” break the laws of thermodynamics in probably every respect. In your system, energy can flow from a colder object to a hotter object. In your system, entropy can decrease in a closed system. In summary, not only are your definitions of entropy and temperature confusing a rather difficult but unconfused subject, but they are also violating all the relationships that people versed in thermodynamics carry around about entropy and temperature.
So what is the possible point of calling your newly defined quantities entropy and temperature? It seems to me the only point is to piggyback your relatively useless concepts on the well-deserved reputation of entropy and temperature in order to get them an attention they do not deserve.
No matter how much information I have about a pot of boiling water, it is still capable of turning a turbine with its steam, cooking rice, and melting ice cubes. If you redefine temperature so that the boiling water is at 0 K but still melting ice cubes by transferring energy to the ice even though the ice is at a much hotter 250 K, then I sure wish you would call this thing, which has nothing to do with average kinetic energy or with which direction energy will flow, something else.
To the extent that you want to define something that allows you to characterize a boiling pot of water as having either zero entropy or zero temperature, define away.
It’s not an arbitrary definition made for fun. It is—as I’ve pointed out—the only definition that is consistent. Any other set of definitions will lead to ‘paradoxes’, like Maxwell’s demon or various other ‘violations’ of the 2nd law.
I will point out that your quantities of “entropy” and “temperature” break the laws of thermodynamics in probably every respect.
On the contrary, they are the only consistent way of looking at thermodynamics.
In your system, energy can flow from a colder object to a hotter object.
And why not? Every time a battery powers an (incandescent) flashlight, energy is flowing from a colder object to a hotter object.
It seems to me the only point is to piggyback your relatively useless concepts on the well-deserved reputation of entropy and temperature in order to get them an attention they do not deserve.
The point is to put thermodynamics on a rigorous and general footing. That’s why Jaynes and others proposed MaxEnt thermodynamics.
No matter how much information I have about a pot of boiling water, it is still capable of turning a turbine with its steam, cooking rice, and melting ice cubes
These things you speak of are due to the energy in the boiling water, not the temperature, and energy is not changed no matter how much you know about the system. A system at 0 K can still carry energy. There is nothing in the laws of physics that prevents this.
And why not? Every time a battery powers an (incandescent) flashlight, energy is flowing from a colder object to a hotter object.
Actually, no. The temperature of the electrons moving in the current is quite high. At least according to the uncontroversial definitions generally used. These electrons have a lot of kinetic energy.
A system at 0 K can still carry energy. There is nothing in the laws of physics that prevents this.
Actually there is. 0 K is the state where no further energy can be extracted from the system. So a 0 K system can’t do work on any system, whether the other system is at 0 K also, or not.
Do you have in mind that a motor could be cooled down to 0 K and then run, or that a battery could be cooled down to 0 K and then run? It could be that parts of a battery or motor are at 0 K, perhaps the metal rods or cylinders of a motor are at 0 K, but the motor still turns to produce energy. But the motor itself is not at 0 K: it has motion, kinetic energy, which would be lower if it stopped running.
By the way, do you have any links to anything substantial that puts the temperature of microscopically known boiling water at 0 K? So far I’ve been contradicting your assertions without seeing the details that might lie behind them.
The temperature of the electrons moving in the current is quite high. At least according to the uncontroversial definitions generally used.
I have to say, that definition is quite new to me. The electron temperature in a piece of copper is pretty much the same as the rest of the copper, even when it’s carrying many amps of current.
But to give an even more straightforward example, think of a cold flywheel turning a hot flywheel. I suppose you’re going to say that the cold flywheel is ‘hot’ because it’s turning. I’m sorry but that’s not how thermodynamics works.
Actually there is. 0 K is the state where no further energy can be extracted from the system. So a 0 K system can’t do work on any system, whether the other system is at 0 K also, or not.
What is the exact law that says this? I’d really like to see it. The thermodynamics you’re talking about seems drastically different from the thermodynamics I learned in school.
But the motor itself is not at 0 K: it has motion, kinetic energy, which would be lower if it stopped running.
Forget a motor, just imagine an object at 0 K moving linearly through outer space.
By the way, do you have any links to anything substantial that puts the temperature of microscopically known boiling water at 0 K?
EY gives plenty of references in his linked sequences on this.
But to give an even more straightforward example, think of a cold flywheel turning a hot flywheel. I suppose you’re going to say that the cold flywheel is ‘hot’ because it’s turning. I’m sorry but that’s not how thermodynamics works.
The equipartition theorem says that a system in thermal equilibrium has energy k*T/2 per degree of freedom. Consider a rigid flywheel weighing 1 kg and spinning at “around” 1 m/s so that its kinetic energy from its rotation is 1 J. I’d like to say this system has 1 degree of freedom, the spinning of the flywheel, and so its temperature is 1/k = 7e22 K. But in case you point out that the flywheel can be flying through space as well as spinning on any one of three axes, let’s say its temperature is 7e22/6 = about 1e22 K.
A macroscopic rigid system has massively more weight than molecules in a gas but not very many degrees of freedom. If temperatures can be assigned to these at all, they are MASSIVE temperatures.
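As a quick check of this back-of-the-envelope arithmetic, here is the same equipartition estimate in Python, keeping the explicit factor of 2 from kT/2 per degree of freedom (dropped in the rough figures above, but it does not change the ballpark):

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def equipartition_temperature(energy_joules, dof):
    """Temperature at which `dof` degrees of freedom hold the given
    energy under equipartition (k_B*T/2 per degree of freedom)."""
    return 2.0 * energy_joules / (dof * k_B)

E = 1.0  # J of macroscopic kinetic energy, as in the flywheel example

print(equipartition_temperature(E, 1))  # ~1.4e23 K, one rotational mode
print(equipartition_temperature(E, 6))  # ~2.4e22 K, six rigid-body modes
```

Either way the number lands in the 1e22 to 1e23 K range claimed above.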
But it is not a rigid body, you say; it is a solid made of atoms that can vibrate. Indeed the solid flywheel might be made of a piece of metal which is at 200 K or 300 K or whatever temperature you want to have heated it up to. But consider an experiment with a flywheel made of metal at 300 K, where the flywheel is being spun and unspun: the energy of the spinning is not “thermalizing” with the internal vibrational energy of the flywheel. It is not thermalizing, which means these are thermodynamically uncoupled systems, which means the effective temperature of the macroscopic rotation of the flywheel is in the 1e22 K kind of range.
This IS how thermodynamics works. We don’t usually talk about thermo of macroscopic objects with very few degrees of freedom. That doesn’t mean we can’t, or even that we shouldn’t.
Actually there is. 0 K is the state where no further energy can be extracted from the system. So a 0 K system can’t do work on any system, whether the other system is at 0 K also, or not.
What is the exact law that says this? I’d really like to see it. The thermodynamics you’re talking about seems drastically different from the thermodynamics I learned in school.
See for example http://physics.about.com/od/glossary/g/absolutezero.htm “Absolute zero is the lowest possible temperature, at which point the atoms of a substance transmit no thermal energy—they are completely at rest.”
Forget a motor, just imagine an object at 0 K moving linearly through outer space.
OK. As with the flywheel, a 1 kg object moving at 1 m/s through space has about 0.5 J of kinetic energy. Even if we attribute 6 degrees of freedom to this object, that kinetic energy corresponds to about 1e22 K.
EY gives plenty of references in his linked sequences on this.
I looked through this thread and there are no links to any sequences. I searched the Wiki for “Jaynes” and there were very few references, only to mind projection fallacy. So if in fact there is any link anywhere to an argument that a pot of water with microscopically known positions and velocities is somehow at 0 K, please just point me to it.
Let me see if I can pick apart your misconceptions.
About the flywheel example, no, rotation does not lead to temperature, because the rotational energy of the flywheel is not thermal energy. You seem to be mixing up thermal with non-thermal energy. In thermodynamics we assign several different kinds of energy to a system:
Total energy: Internal energy + Potential energy + Kinetic energy
Potential energy: Energy due to external force fields (gravity, electromagnetism, etc.)
Kinetic energy: Energy due to motion of the system as a whole (linear motion, rotational motion, etc.)
Internal energy/thermal energy: The energy that is responsible for the temperature of a system.
But here’s the kicker: The division between these concepts is not a fundamental law of nature, but depends on your model. So yes, you could build a model where rotation is included in thermal energy. But then, rotation would be part of the entropy as well, so at nonzero temperature you could not model it as rotating at a fixed speed! You’d have to model the rotation as a random variable. Clearly this contradicts rotation at a fixed speed. That is, unless you also set the temperature to 0 K, in which case entropy would be zero and so you could set the rotation to a fixed speed.
Now about the relationship between internal energy and degrees of freedom. You’re misunderstanding what a degree of freedom is. The equipartition theorem says that the average energy of a particle with n degrees of freedom is nkT/2, but even if you included rotational energy as thermal energy, a large spinning object has much more than one degree of freedom. It has degrees of freedom associated with its many vibrational modes. It has so many vibrational modes that the associated ‘temperature’ is actually very low, not high as you describe. Indeed, if it were to ‘thermalize’ (say, through friction), it would not warm up the object that much. If it were true that the temperature due to rotation were 1e22 K, then letting it thermalize would violate conservation of energy by tens of orders of magnitude (it would turn into quark-gluon plasma and explode violently, vaporizing half of the planet Earth).
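To put a number on “would not warm up the object that much”: a rough Dulong-Petit estimate, assuming for illustration that the 1 kg flywheel from the example is made of iron (the material choice is my assumption, not from the thread):

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K
N_A = 6.022e23      # Avogadro's number, 1/mol

# Assumed flywheel: 1 kg of iron, molar mass ~55.85 g/mol.
mass_kg = 1.0
molar_mass_kg = 0.05585
n_atoms = mass_kg / molar_mass_kg * N_A   # ~1.1e25 atoms

# Dulong-Petit heat capacity: about 3*k_B per atom for a solid.
heat_capacity = 3.0 * n_atoms * k_B       # ~450 J/K

E_rotation = 1.0  # J of macroscopic spin energy, as in the example

delta_T = E_rotation / heat_capacity
print(f"{n_atoms:.2e} atoms, C ~ {heat_capacity:.0f} J/K")
print(f"temperature rise after thermalizing 1 J: {delta_T:.2e} K")  # ~2e-3 K
```

So the 1 J of spin, once shared among all the vibrational degrees of freedom, raises the temperature by only a couple of millikelvin.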
And finally, you cannot calculate absolute energy for an object moving linearly through space. The kinetic energy depends on the rest frame.
Let me see if I can pick apart your misconceptions.
Ok, I have a PhD in Applied Physics. I have learned thermo and statistical mechanics a few times, including two graduate-level courses. I have recently been analyzing internal and external combustion engines as part of my job, and have relearned some parts of thermo for that. It may be that despite my background, I have not done a good job of explaining what is going on with thermo. But what I am explaining here is, at worst, the way a working physicist would see thermo, informed by a science that explains a shitload of reality, and in a way which is no more subjective than the “spooky action at a distance” of electromagnetic and gravitational fields. I realize appealing to my credentials is hardly an argument. However, I am pretty sure that I am right, and I am pretty sure that the things I have been claiming are all within spitting distance of discussions and examples of thermo and stat mech calculations and considerations that we really talked about when I was learning this stuff.
My confidence in my position is not undermined by anything you have said, so far. I have asked you for a link to something with some kind of detail that explicates the 0 K, 0 entropy boiling water position, or some version of the broken concepts you are speaking generally about. You have referred only to things already linked in the thread, or in the sequence on this topic, and I have found no links in the thread that were relevant. I have asked you again to link me to something and you haven’t.
But despite your not giving me anything to work with from your side, I believe I understand what you are claiming. For the entropy side I would characterize it this way. Standard entropy makes a list of all states at the appropriate energy of an isolated system and says there is equal probability of the system being in any of these. And so the entropy at this energy of this isolated system is log(N(E)) where N(E) is the number of states that have energy E.
I think what you are saying is that if you have detailed knowledge of which state the system is in now, then with the details you have you can predict the exact trajectory of the system through state space, and so the number of states the system can be in is 1 because you KNOW which one it must be in. And so its entropy is 0.
A version of my response would be: so you know which state the system is in at any instant of time and so you feel like the entropy is log(1) at any instant in time. But the system still evolves through time through all the enumerated states. And its entropy is log(N(E)), the count of states it evolves through, and it is unchanged that you know at each instant which state it is in. So I know the details of every collision because I follow the motions in detail, but every collision results in the system changing states, as every collision changes the direction and speed of two molecules in the system, and over some short time, call it a thermalization time, the system explores nearly all N(E) states. So despite our superior knowledge that gives us the time-sequence of how the system changes from state to state and when, it still explores N(E) states, and its properties of melting ice or pushing pistons are still predictable purely from knowledge of N(E), and are not helped or hurt by a detailed knowledge of the time evolution of the system, the details of how it goes about exploring all N(E) states.
I have just reread this article on Maxwell’s Demons. I note that at no point do they deviate from the classic definitions of temperature and entropy. And indeed, the message seems to be that once the demon is part of the system, the system grows classical entropy exactly as predicted, the demons themselves are engines producing the entropy increases needed to balance all equations.
Now about the relationship between internal energy and degrees of freedom. You’re misunderstanding what a degree of freedom is.
I said rotation or movement of a rigid body. By definition a rigid body doesn’t have modes of vibration in it. Of course you may think that all real bodies are not truly rigid as they are made out of molecules. But if the macroscopic motion is only weakly coupled to the vibrational modes of the material it is made of, then this is essentially saying the macroscopic and vibrational systems are insulated from each other, and so maintain their own internal temperatures, which can be different from each other. Just as two gases separated by a heat-insulating wall can be at different temperatures, a feature we find often used in thermodynamic calculations.
And finally, you cannot calculate absolute energy for an object moving linearly through space. The kinetic energy depends on the rest frame.
You actually asked me to “Forget a motor, just imagine an object at 0 K moving linearly through outer space.” And so I used the example you asked me to use.
Credentials aren’t very relevant here, but if we’re going to talk about them, I have a PhD in engineering and a BS in math (minor in physics).
and in a way which is no more subjective than the “spooky action at a distance” of electromagnetic and gravitational fields.
Again, as I’ve pointed out at least once before, entropy is not subjective. Being dependent on model and information does not mean it is subjective.
And so the entropy at this energy of this isolated system is log(N(E)) where N(E) is the number of states that have energy E.
Right off the bat, this is wrong. In a continuous system the state space could be continuous (uncountably infinite) and so N(E) makes no sense. “Logarithm of the number of states of the system” is just a loose way of describing what entropy is, not a precise way.
and so the number of states the system can be in is 1 because you KNOW which one it must be in. And so its entropy is 0.
The number of states a system can be in is always 1! A system (a classical system, at least) can never be in more than one state at a time. The ‘number of states’, insofar as it is loosely used, means the size of the state space according to our model and our information about the system.
And its entropy is log(N(E)), the count of states it evolves through, and it is unchanged that you know at each instant which state it is in.
There are several things wrong with this. First of all, it assumes the ergodic hypothesis (time average = space average) and the ergodic hypothesis is not required for thermodynamics to work (although it does make a lot of physical systems easier to analyze). But it also has another problem in that it makes entropy dependent on time scale. That is, choosing a fine time scale would decrease entropy. This is not how entropy works. And at any rate, it’s not what entropy measures anyway.
I said rotation or movement of a rigid body. By definition a rigid body doesn’t have modes of vibration in it.
But I’m not assuming a rigid body. You are. There is no reason to assume a rigid body. I offered an example of a cold flywheel turning a hot flywheel, as a system where energy moves from a cold object to a hot object. You decided for some reason that the flywheels must be rigid bodies. They aren’t, at least not in my example.
Right off the bat, this is wrong. In a continuous system the state space could be continuous (uncountably infinite) and so N(E) makes no sense. “Logarithm of the number of states of the system” is just a loose way of describing what entropy is, not a precise way.
A finite system at finite energy has a finite number of states in quantum mechanics. So if we restrict ourselves to situations which could ever be realized by human investigators in our universe, conclusions reached using discrete states are valid.
There are several things wrong with this. First of all, it assumes the ergodic hypothesis
No, I am considering all possible states N(E) of the system at energy E. Many of these states will be highly spatially anisotropic, and I am still including them in the count.
But it also has another problem in that it makes entropy dependent on time scale. That is, choosing a fine time scale would decrease entropy. This is not how entropy works. And at any rate, it’s not what entropy measures anyway.
Since you won’t show me in any detail the calculation that leads to water having 0 temperature or 0 entropy if you have special knowledge of it, I can only work from my guesses about what you are talking about. And my guess is that you achieve low entropy, 0 entropy, because with sufficient special knowledge you reduce the number of possible states to 1 at any instant, the state that the system is actually in at that instant. But if you count the number of states the system has been in as time goes by, every time two things collide and change velocity you bounce to another state, and so even with perfect knowledge of the time evolution over a long enough time, you still cover all possible N(E) states. But over an insufficiently long time you cover a smaller number of states. In fact, the behavior of states looked at on time-scales too short to get “thermalization,” that is, too short to allow the system to change through a significant fraction of the available states, might possibly be describable with an entropy that depended on time, but the last thing I want to do is define new things and call them entropy when they do not have the properties of the classic entropy I have been advocating for through this entire thread.
You decided for some reason that the flywheels must be rigid bodies. They aren’t, at least not in my example.
Given the length of this thread, I think it would be better if you read all the sentences in each paragraph rather than responding to one out of context.
Seriously, can’t you give me an example of your 0 K 0 entropy boiling water and tell me what you hope to know from this example that we don’t know already? We have probably gotten most of what we can get from an open-ended discussion of philosophy of thermodynamics. A real example from you would certainly restrict the field of discussion, possibly to something even worth doing. Who knows, I might look at what you have and agree with your conclusions.
Also, I still don’t buy the claim about the temperature. You said in the linked comment that putting a known-microstate cup of tea in contact with an unknown-microstate cup of tea wouldn’t really be thermal equilibrium because it would be “not using all the information at your disposal. And if you don’t use the information it’s as if you didn’t have it.”
If I know the exact state of a cup of tea, and am able to predict how that state will evolve in the future, the cup of tea has zero entropy.
Then suppose I take a glass of water that is Boltzmann-distributed. It has some spread over possible microstates—the bigger the spread, the higher the entropy (and also the temperature, for Boltzmann-distributed things).
Then you put the tea and the water in thermal contact. Now, for every possible microstate of the glass of water, the combined system evolves to a single final microstate (only one, because you know the exact state of the tea). The combined system is no longer Boltzmann in either subsystem, and has the same entropy as the original glass of water, just moved into different microstates.
Note that it didn’t matter what the water’s temperature was—all that mattered was that the tea’s distribution had zero entropy. The fact that there has been no increase in entropy is the proof that all the information has been used. If the water had the same average energy as the tea, so that no macroscopic amount of energy was exchanged, then these things would be in thermal equilibrium by your standards.
Then you put the tea and the water in thermal contact. Now, for every possible microstate of the glass of water, the combined system evolves to a single final microstate (only one, because you know the exact state of the tea).
After you put the glass of water in contact with the cup of tea, you will quickly become uncertain about the state of the tea. In order to still know the microstate, you need to be fed more information.
If you have a Boltzmann distribution, you still know all the microstates—you just have a probability distribution over them. Time evolution in contact with a zero-entropy object moves probability from one microstate to another in a predictable way, with neither compression nor spreading of the probability distribution.
Sure, this requires obscene amounts of processing power to keep track of, but not particularly more than it took to play Maxwell’s demon with a known cup of tea.
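This is easiest to see in the discrete toy picture the original post asked for: if the joint system evolves by a deterministic, reversible (one-to-one) map on microstates, the dynamics just permute the entries of your probability distribution, so its entropy cannot change. A minimal sketch, with a made-up state space and distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    """Shannon entropy (in nats) of a discrete probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

n_states = 64

# Known tea: a point distribution, zero entropy.
tea = np.zeros(n_states)
tea[7] = 1.0

# Unknown water: some spread-out distribution.
water = rng.random(n_states)
water /= water.sum()

# Joint distribution over (tea microstate, water microstate).
joint = np.outer(tea, water).ravel()

# Reversible, deterministic dynamics = a permutation of joint microstates.
permutation = rng.permutation(joint.size)
joint_later = joint[permutation]

print(entropy(joint), entropy(joint_later))  # identical entropies
```

(The caveat raised further down the thread is that this argument leans on a discrete, fixed state space; with continuous states or a growing accessible volume the bookkeeping changes.)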
Firstly, even if you actually had a block of ice at 0 K and put it in thermal contact with a warm glass of water, the total system entropy would increase over time. It is completely false that the number of initial and final microstates is the same. Entropy depends on volume as well as temperature. (To see why this is the case, consider that you’re dealing with a continuous phase space, not a discrete one.)
Additionally, your example doesn’t apply to what I’m talking about, because nowhere are you using the information about the cup of tea. Again, as I said, if you don’t use the information it’s as if you didn’t have it.
I am fully aware that saying it in this way is clumsy and hard to understand (and not 100% convincing, even though it really is true). That’s why I’m looking for a more abstract, theoretical way of saying it.
I’m not really sure why you say volume is changing here.
I don’t understand how you want information to be used, if not to calculate a final distribution over microstates, or what you think “losing information” is if not an increase in entropy. If we’re having some sort of disconnect I’d be happy to talk more, but if you’re trolling me I would like to not be trolled.
I’m not really sure why you say volume is changing here.
Think about putting a packet of gas next to a vacuum and allowing it to expand. In this case it’s even easier to see that the requirements of your thought experiment hold—you know the exact state of the vacuum, because it has no microstates. Yet the total system entropy will still increase as the molecules of gas expand to fill the vacuum. Even if you have perfect information about the gas at the beginning (zero entropy), at the end of the experiment you will not. You will have some uncertainty. This is because the phase space itself has expanded.
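The textbook ideal-gas version of this: letting N molecules expand freely from volume V1 into a total volume V2 raises the entropy by N·k·ln(V2/V1), even though the microscopic dynamics are deterministic, because the accessible phase-space volume per particle grows. A quick sketch with illustrative numbers (one mole doubling its volume is my choice, not from the thread):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def free_expansion_entropy(n_molecules, v_initial, v_final):
    """Ideal-gas entropy increase for free expansion from v_initial to v_final."""
    return n_molecules * k_B * math.log(v_final / v_initial)

N = 6.022e23        # one mole of gas (illustrative)
V1, V2 = 1.0, 2.0   # expand into an equal volume of vacuum (arbitrary units)

print(free_expansion_entropy(N, V1, V2))  # ~5.8 J/K, i.e. N*k*ln(2)
```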
If we’re having some sort of disconnect I’d be happy to talk more,
I think we are. I suggest becoming familiar with R. Landauer’s and C. H. Bennett’s work. I’d be happy to discuss this further if we are on the same page.
Think about putting a packet of gas next to a vacuum and allowing it to expand. In this case it’s even easier to see that the requirements of your thought experiment hold
Oh, I see, you’re thinking of particle exchange, like if one dumped the water into the tea. This case is not what I intended—by thermal contact I just mean exchange of energy.
With identical particles, the case with particle exchange gets complicated. There might even be some interesting physics there.
The thermodynamics of energy exchange and mass exchange are actually similar. You still get the increase in entropy, even if you are just exchanging energy.
On the one hand, this is a good point that points out a weakness in my argument—if states are continuous rather than discrete, one can increase or decrease entropy even with deterministic time-evolution by spreading out or squeezing probability mass.
But I don’t know how far outside the microcanonical this analogy you’re making holds. Exchanging energy definitely works like exchanging particles when all you know is the total energy, but there’s no entropy increase when both are in a single microstate, or when both have the same Boltzmann distribution (hm, or is there?).
The lesson is that statistical methods are superfluous if you know everything with certainty. It’s worth noting that classical mechanics is completely symmetric with respect to time (does not have a distinguished “arrow of time”), whereas thermodynamics has a definite arrow of time. You run into problems if you assume that everything behaves classically and try to apply thermodynamic notions.
Landau and Lifshitz’s Statistical Physics has some discussion of issues with entropy.
I understand what you’re saying and I agree. Though it’s worth mentioning that the ‘arrow of time’ in thermodynamics actually doesn’t exist for closed, reversible systems.
I’m pretty sure Manfred is right. If you drop a block of ice of unknown configuration into a cup of tea of known configuration, then your uncertainty about the system will grow over time. Of course entropy != temperature. You could say that the tea has zero entropy, but not zero temperature.
The block of ice is not of unknown configuration. The block of ice in my example is at 0 K, which means it has zero entropy (all molecules rigidly locked in a regular periodic lattice) and thus its configuration is completely known.
OK. As with the flywheel, a 1 kg object moving at 1 m/s through space has 1 J of kinetic energy. Even if we attribute 6 degrees of freedom to this object, that kinetic energy corresponds to about 1e22 K.
I looked through this thread and there are no links to any sequences. I searched the Wiki for “Jaynes” and there were very few references, only to mind projection fallacy. So if in fact there is any link anywhere to an argument that a pot of water with microscopically known positions and velocities is somehow at 0 K, please just point me to it.
Let me see if I can pick apart your misconceptions.
About the flywheel example, no, rotation does not lead to temperature, because the rotational energy of the flywheel is not thermal energy. You seem to be mixing up thermal with non-thermal energy. In thermodynamics we assign several different kinds of energy to a system:
Total energy: Internal energy + Potential energy + Kinetic energy
Potential energy: Energy due to external force fields (gravity, electromagnetism, etc.)
Kinetic energy: Energy due to motion of the system as a whole (linear motion, rotational motion, etc.)
Internal energy/thermal energy: The energy that is responsible for the temperature of a system.
But here’s the kicker: The division between these concepts is not a fundamental law of nature, but depends on your model. So yes, you could build a model where rotation is included in thermal energy. But then, rotation would be part of the entropy as well, so at nonzero temperature you could not model it as rotating at a fixed speed! You’d have to model the rotation as a random variable. Clearly this contradicts with rotation at a fixed speed. That is, unless you also set the temperature to 0 K, in which entropy would be zero and so you could set the rotation to a fixed speed.
Now about the relationship between internal energy and degrees of freedom. You’re misunderstanding what a degree of freedom is. The equipartition theorem says that the average energy of a particle with n degrees of freedom is nkT/2, but even if you included rotational energy as thermal energy, a large spinning object has much more than one degree of freedom. It has degrees of freedom associated with its many vibrational modes. It has so many vibrational modes that the associated ‘temperature’ is actually very low, not high as you describe. Indeed, if it were to ‘thermalize’ (say, through friction), it would not warm up the object that much. If it were true that the temperature due to rotation is 1e22, then if you let it thermalize it would violate conservation of energy, by tens of orders of magnitude (it would turn into quark-gluon plasma and explode violently, vaporizing half of the planet Earth).
And finally, you cannot calculate absolute energy for an object moving linearly through space. The kinetic energy depends on the rest frame.
Ok, I have a PhD in Applied Physics. I have learned thermo and statistical mechanics a few times including two graduate level courses. I have recently been analyzing internal and external combustion engines as part of my job, and have relearned some parts of thermo for that. It may be that despite my background, I have not done a good job of explaining what is going on with thermo. But what I am explaining here is, at worst, the way a working physicist would see thermo, informed by a science that explains a shitload of reality, and in a way which is no more subjective than the “spooky action at a distance” of electromagnetic and gravitational fields. I realize appealing to my credentials is hardly an argument. However, I am pretty sure that I am right and I am pretty sure that what I have been claiming are all within spitting distance of discussions and examples of thermo and stat mech calculations and considerations that we really talked about when I was learning this stuff.
My confidence in my position is not undermined by anything you have said, so far. I have asked you for a link to something with some kind of detail that explicates the 0 K 0 entropy boiling water position, or some version of the broken concepts you are speaking generally about. You have referred only to things already linked in the thread, or in the sequence on this topic, and i have found no links in the thread that were relevant. I have asked you again to link me to something and you haven’t.
But despite your not giving me anything to work with from your side, I have believed I understand what you are claiming. For the entropy side I would characterize it this way. Standard entropy makes a list of all states at the appropriate energy of an isolated system and say there is equal probability of the system being in any of these. And so the entropy at this energy of this isolated system is log(N(E)) where N(E) is the number of states that have energy E.
I think what you are saying is that if you have detailed knowledge of which state the system is in now, then with the details you have you can predict the exact trajectory of the system through state space, and so the number of states the system can be in is 1 because you KNOW which one it must be in. And so its entropy is 0.
A version of my response would be: so you know which state the system is at any instant of time and so you feel like the entropy is log(1) at any instant in time. But the system still evolves through time through all the enumerated states. And its entropy is log(N(E)), the count of states it evolves through, and it is unchanged that you know at each instant which state it is in. So I know the details of every collision because I follow the motions in detail, but every collision results in the system changing states as every collision changes the direction and speed of two molecules in the system, and over some short time, call it a thermalization time, the system explores nearly all N(E) states. So despite our superior knowledge that gives us the time-sequence of how the system changes from state to state and when, it still explores N(E) states, and its properties of melting ice or pushing pistons is still predictable purely from knowledge of N(E), and is not helped or hurt by a detailed knowledge of the time evolution of the system, the details of how it goes about xploring all N(E) states.
I have just reread this article on Maxwell’s Demons. I note that at no point do they deviate from the classic definitions of temperature and entropy. And indeed, the message seems to be that once the demon is part of the system, the system grows classical entropy exactly as predicted, the demons themselves are engines producing the entropy increases needed to balance all equations.
I said rotation or movement of a rigid body. By definition a rigid body doesn’t have modes of vibration in it. Of course you may think that all real bodies are not truly rigid as they are made out of molecules. But if the macroscopic motion is only weakly coupled to the vibrational modes of the material it is made of, then this is essentially saying the macroscopic and vibrational systems are insulated from each other, and so maintain there own internal temperatures which can be different from each other. Just as two gases separated by a heat-insulating wall can be at different temperatures, a feature we find often used in thermodynamic calculations.
You actually asked me to “Forget a motor, just imagine an object at 0 K moving linearly through outer space.” And so I used the example you asked me to use.
Credentials aren’t very relevant here, but if we’re going to talk about them, I have a PhD in engineering and a BS in math (minor in physics).
Again, as I’ve pointed out at least once before, entropy is not subjective. Being dependent on model and information does not mean it is subjective.
Right off the bat, this is wrong. In a continuous system the state space could be continuous (uncountably infinite) and so N(E) makes no sense. “Logarithm of the number of states of the system” is just a loose way of describing what entropy is, not a precise way.
The number of states a system can be in is always 1! A system (a classical system, at least) can never be in more than one state at a time. The ‘number of states’, insofar as it is loosely used, means the size of the state space according to our model and our information about the system.
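To pin down what that loose phrase stands in for: the relevant quantity is the Gibbs/Shannon entropy, -sum_i p_i log(p_i), of a probability distribution over microstates. It equals log(N) only for a uniform distribution over N states, and it drops to 0 when the microstate is known exactly. A minimal sketch with made-up numbers:

```python
from math import log

def entropy(p):
    """Gibbs/Shannon entropy of a distribution over microstates (in nats)."""
    return sum(-pi * log(pi) for pi in p if pi > 0)

N = 8
uniform = [1 / N] * N              # all we know is which states are allowed
known = [1.0] + [0.0] * (N - 1)    # exact microstate known

print(entropy(uniform))  # log(8), about 2.079
print(entropy(known))    # 0.0
```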
There are several things wrong with this. First of all, it assumes the ergodic hypothesis (time average = space average) and the ergodic hypothesis is not required for thermodynamics to work (although it does make a lot of physical systems easier to analyze). But it also has another problem in that it makes entropy dependent on time scale. That is, choosing a fine time scale would decrease entropy. This is not how entropy works. And at any rate, it’s not what entropy measures anyway.
But I’m not assuming a rigid body. You are. There is no reason to assume a rigid body. I offered an example of a cold flywheel turning a hot flywheel, as a system where energy moves from a cold object to a hot object. You decided for some reason that the flywheels must be rigid bodies. They aren’t, at least not in my example.
A finite system at finite energy has a finite number of states in quantum mechanics. So if we restrict ourselves to situations which could ever be realized by human investigators in our universe, conclusions reached using discrete states are valid.
No, I am considering all possible states N(E) of the system at energy E. Many of these states will be highly spatially anisotropic, and I am still including them in the count.
Since you won’t show me in any detail the calculation that leads to water having 0 temperature or 0 entropy if you have special knowledge of it, I can only work from my guesses about what you are talking about. And my guess is that you achieve low entropy, 0 entropy, because with sufficient special knowledge you reduce the number of possible states to 1 at any instant: the state that the system is actually in at that instant. But if you count the number of states the system has been in as time goes by, every time two things collide and change velocity you bounce to another state, and so even with perfect knowledge of the time evolution, over a long enough time you still cover all possible N(E) states. Over an insufficiently long time you cover a smaller number of states. In fact, the behavior of the system looked at on time-scales too short to get “thermalization,” that is, too short to allow it to move through a significant fraction of the available states, might possibly be describable with an entropy that depends on time. But the last thing I want to do is define new things and call them entropy when they do not have the properties of the classic entropy I have been advocating for through this entire thread.
Given the length of this thread, I think it would be better if you read all the sentences in each paragraph rather than responding to one out of context.
Seriously, can’t you give me an example of your 0 K 0 entropy boiling water and tell me what you hope to know from this example that we don’t know already? We have probably gotten most of what we can get from an open-ended discussion of philosophy of thermodynamics. A real example from you would certainly restrict the field of discussion, possibly to something even worth doing. Who knows, I might look at what you have and agree with your conclusions.
Nope, sorry.
Also, I still don’t buy the claim about the temperature. You said in the linked comment that putting a known-microstate cup of tea in contact with an unknown-microstate cup of tea wouldn’t really be thermal equilibrium because it would be “not using all the information at your disposal. And if you don’t use the information it’s as if you didn’t have it.”
If I know the exact state of a cup of tea, and am able to predict how that state will evolve in the future, the cup of tea has zero entropy.
Then suppose I take a glass of water that is Boltzmann-distributed. It has some spread over possible microstates—the bigger the spread, the higher the entropy (and also the temperature, for Boltzmann-distributed things).
Then you put the tea and the water in thermal contact. Now, for every possible microstate of the glass of water, the combined system evolves to a single final microstate (only one, because you know the exact state of the tea). The combined system is no longer Boltzmann in either subsystem, and has the same entropy as the original glass of water, just moved into different microstates.
Note that it didn’t matter what the water’s temperature was—all that mattered was that the tea’s distribution had zero entropy. The fact that there has been no increase in entropy is the proof that all the information has been used. If the water had the same average energy as the tea, so that no macroscopic amount of energy was exchanged, then these things would be in thermal equilibrium by your standards.
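Here is the bookkeeping in a discrete toy version (my own sketch with made-up probabilities; finite microstates, so it sidesteps the continuous-phase-space subtlety): with the tea in a single known microstate, the deterministic joint evolution acts as a bijection on the water’s microstates, and a bijection only relabels them, leaving the entropy of the water’s distribution unchanged.

```python
from math import log
import random

def entropy(p):
    return sum(-q * log(q) for q in p.values() if q > 0)

# Water: a Boltzmann-ish distribution over a handful of labelled microstates.
water = {"w0": 0.4, "w1": 0.3, "w2": 0.2, "w3": 0.1}

# Tea in a single known microstate: the deterministic joint dynamics then
# sends each water microstate to exactly one final joint microstate,
# i.e. it acts as a bijection (here, a random permutation) on the labels.
labels = list(water)
permuted = random.sample(labels, k=len(labels))
evolved = {new: water[old] for old, new in zip(labels, permuted)}

print(entropy(water), entropy(evolved))  # identical, about 1.28 nats each
```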
After you put the glass of water in contact with the cup of tea, you will quickly become uncertain about the state of the tea. In order to still know the microstate, you need to be fed more information.
If you have a Boltzmann distribution, you still know all the microstates—you just have a probability distribution over them. Time evolution in contact with a zero-entropy object moves probability from one microstate to another in a predictable way, with neither compression nor spreading of the probability distribution.
Sure, this requires obscene amounts of processing power to keep track of, but not particularly more than it took to play Maxwell’s demon with a known cup of tea.
That’s wrong on both counts.
Firstly, even if you actually had a block of ice at 0 K and put it in thermal contact with a warm glass of water, the total system entropy would increase over time. It is completely false that the number of initial and final microstates is the same. Entropy depends on volume as well as temperature. (To see why this is the case, consider that you’re dealing with a continuous phase space, not a discrete one.)
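To spell out the volume dependence: for a monatomic ideal gas the standard Sackur-Tetrode result is S = N k_B [ln(V / (N lambda^3)) + 5/2], with lambda = h / sqrt(2 pi m k_B T) the thermal de Broglie wavelength, so at fixed temperature the entropy grows with the accessible volume.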
Additionally, your example doesn’t apply to what I’m talking about, because nowhere are you using the information about the cup of tea. Again, as I said, if you don’t use the information it’s as if you didn’t have it.
I am fully aware that saying it in this way is clumsy and hard to understand (and not 100% convincing, even though it really is true). That’s why I’m looking for a more abstract, theoretical way of saying it.
I’m not really sure why you say volume is changing here.
I don’t understand how you want information to be used, if not to calculate a final distribution over microstates, or what you think “losing information” is if not an increase in entropy. If we’re having some sort of disconnect I’d be happy to talk more, but if you’re trolling me I would like to not be trolled.
Think about putting a packet of gas next to a vacuum and allowing it to expand. In this case it’s even easier to see that the requirements of your thought experiment hold—you know the exact state of the vacuum, because it has no microstates. Yet the total system entropy will still increase as the molecules of gas expand to fill the vacuum. Even if you have perfect information about the gas at the beginning (zero entropy), at the end of the experiment you will not. You will have some uncertainty. This is because the phase space itself has expanded.
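(For reference, the textbook free-expansion number: an ideal gas of N molecules doubling its volume gains Delta S = N k_B ln 2, about 5.76 J/K per mole, even though the underlying microscopic dynamics is deterministic and energy-conserving.)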
I think we are. I suggest becoming familiar with R. Landauer’s and C. H. Bennett’s work. I’d be happy to discuss this further if we are on the same page.
Oh, I see, you’re thinking of particle exchange, like if one dumped the water into the tea. This case is not what I intended—by thermal contact I just mean exchange of energy.
With identical particles, the case with particle exchange gets complicated. There might even be some interesting physics there.
The thermodynamics of energy exchange and mass exchange are actually similar. You still get the increase in entropy, even if you are just exchanging energy.
On the one hand, this is a good point, and it exposes a weakness in my argument: if states are continuous rather than discrete, one can increase or decrease entropy even with deterministic time-evolution by spreading out or squeezing probability mass.
But I don’t know how far outside the microcanonical this analogy you’re making holds. Exchanging energy definitely works like exchanging particles when all you know is the total energy, but there’s no entropy increase when both are in a single microstate, or when both have the same Boltzmann distribution (hm, or is there?).
I’ll think about it too.
The lesson is that statistical methods are superfluous if you know everything with certainty. It’s worth noting that classical mechanics is completely symmetric with respect to time (does not have a distinguished “arrow of time”), whereas thermodynamics has a definite arrow of time. You run into problems if you assume that everything behaves classically and try to apply thermodynamic notions.
Landau and Lifshitz’s Statistical Physics has some discussion of issues with entropy.
I understand what you’re saying and I agree. Though it’s worth mentioning that the ‘arrow of time’ in thermodynamics actually doesn’t exist for closed, reversible systems.
I’m pretty sure Manfred is right. If you drop a block of ice of unknown configuration into a cup of tea of known configuration, then your uncertainty about the system will grow over time. Of course entropy != temperature. You could say that the tea has zero entropy, but not zero temperature.
But what’s the point of this thought exercise?
The block of ice is not of unknown configuration. The block of ice in my example is at 0 K, which means it has zero entropy (all molecules rigidly locked in a regular periodic lattice) and thus its configuration is completely known.