I’m going to disagree with you here. Not that energy doesn’t depend on our models; it just depends on them in a very different way. The entropy of a physical system is the Shannon entropy of its distribution of ‘microstates’. But there is no distribution of microstates ‘out there’. It’s a construction that exists purely in our models. Energy, by contrast, does exist ‘out there’. It’s true that no absolute value can be given for energy and that it’s relative, but in a way energy is far more ‘real’ than entropy.
Potential energy depends on what you set the zero level to, but I agree that this is very different from entropy. In particular, the difference in energy between two systems is well-defined.
“Out there” are fields, particles, interacting, moving, bumping into each other, turning into each other. Energy is a convenient description of some part of this process in many models. Just like with Jaynes’ entropy, knowing more about the system changes its energy. For example, just like knowing about isotopes affects the calculated entropy of a mixed system, knowing about nuclear forces changes the calculated potential energy of the system.
I agree with passive_fist, and my argument hasn’t changed since last time.
If we learn that energy changes in some process, then we are wrong about the laws that the system is obeying. If we learn that entropy goes down, then we can still be right about the physical laws, as Jaynes shows.
Another way: if we know the laws, then energy is a function of the individual microstate and nothing else, while entropy is a function of our probability distribution over the microstates and nothing else.
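To make that distinction concrete, here is a minimal sketch (the toy kinetic-energy Hamiltonian and the eight-state distribution are invented for illustration, not anything from the discussion): energy takes a single microstate as input, while entropy takes a probability distribution over microstates.

```python
import numpy as np

# Energy is a function of one definite microstate.
def energy(velocities, m=1.0):
    """Kinetic energy of a single microstate."""
    return 0.5 * m * np.sum(velocities**2)

# Entropy is a function of a probability distribution over microstates.
def shannon_entropy(probs):
    """Shannon entropy: a property of our uncertainty, not of any one state."""
    probs = probs[probs > 0]
    return -np.sum(probs * np.log(probs))

state = np.array([1.0, -2.0, 0.5])
print(energy(state))                        # a property of the state itself

print(shannon_entropy(np.full(8, 1 / 8)))   # ln 8: equal ignorance over 8 states
print(shannon_entropy(np.array([1.0])))     # 0: perfect knowledge
```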
I agree that it feels different. It certainly does to me. Energy feels real, while entropy feels like an abstraction. A rock falling on one’s head is a clear manifestation of its potential (turned kinetic) energy, while getting burned by a hot beverage does not feel like a manifestation of the entropy increase. It feels like the beverage’s temperature is to blame. On the other hand, if we knew precisely the state of every water molecule in the cup, would we still get burned? The answer is not at all obvious to me. Passive_fist claims that the cup would appear to be at absolute zero then:

In the limit of perfect microstate knowledge, the system has zero entropy and is at absolute zero.
I do not know enough stat mech to assess this claim, but it seems wrong to me, unless the claim is that we cannot know the state of the system unless it’s already at absolute zero to begin with. I suppose a toy model with only a few particles present might shed some light on the issue. Or a link to where the issue is discussed.
An easy toy system is a collection of perfect billiard balls on a perfect pool table, that is, one without rolling friction and where all collisions conserve energy. For a few billiard balls it would be quite easy to extract all of their energy as work if you knew their initial positions and velocities. There are plenty of ways to do it, and it’s fun to think of them. This means they are at zero temperature.
If you don’t know the microstate, but you do know the sum of the squares of their velocities, which is conserved in all collisions, you can still say some things about the process. For instance, you can predict the average number of collisions with one wall and the corresponding momentum transfer, which is related to the pressure. If you stick your hand on the table for five seconds, what is the chance you get hit by a ball moving faster than some threshold that would cause pain? All these things are probabilistic.
In the limit where the billiard balls are tiny compared to the size of the table, this is the ideal gas.
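Here is a rough Monte Carlo sketch of that probabilistic knowledge (the ball count, total energy, and ‘pain’ speed are all made-up numbers, and it only estimates how often the ensemble contains a dangerously fast ball, not the full collision geometry): knowing just the conserved sum of squared velocities, we sample microstates uniformly from that energy shell.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16            # balls (assumed)
E_total = 8.0     # total kinetic energy, the one thing we know (mass = 1)
v_pain = 1.5      # speed that would hurt (made up)
samples = 100_000

# Microcanonical sampling: velocities uniform on the shell
# sum(v_i^2) = 2 * E_total. Drawing Gaussians and rescaling gives a
# uniform distribution on that sphere in 2N-dimensional velocity space.
v = rng.normal(size=(samples, N, 2))
norms = np.sqrt((v**2).sum(axis=(1, 2), keepdims=True))
v *= np.sqrt(2 * E_total) / norms

speeds = np.linalg.norm(v, axis=2)              # per-ball speeds, per sample
p_fast = (speeds > v_pain).any(axis=1).mean()
print(f"P(some ball is faster than {v_pain}) ~ {p_fast:.3f}")
```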
If you know precisely the state of every water molecule in the system, there’s no need for your finger to get burned. Just touch your finger to the cup whenever a slow-moving molecule is approaching, and remove it whenever a fast-moving molecule is approaching (Maxwell’s demon).
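A toy version of that demon (the molecular speed distribution and the cutoff are invented for illustration): the finger only accepts contact while the approaching molecule is slow, so the average energy it receives is far below the thermal average.

```python
import numpy as np

rng = np.random.default_rng(1)

kT = 1.0           # tea temperature in energy units (assumed)
n = 1_000_000      # molecules approaching the point of contact

# Approach speeds, a 1D Maxwell-Boltzmann-like stand-in (assumed form)
v = np.abs(rng.normal(scale=np.sqrt(kT), size=n))
energies = 0.5 * v**2

print("mean impact energy, no demon:  ", energies.mean())

# The demon: keep the finger there only while the next molecule is slow.
cutoff = 0.3 * np.sqrt(kT)
print("mean impact energy, with demon:", energies[v < cutoff].mean())
# The finger 'sees' a much colder cup; the price is the information
# (knowing each approach speed in advance) the demon consumes.
```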
Right, supposing you can have a macroscopic Maxwell’s demon. So the claim is not that it is necessarily at absolute zero, but that it does not have a well-defined temperature, because you can choose it to behave (with respect to your finger) as if it were at any temperature you like. Is this what you are saying?
Well, no.
Temperature is the thermodynamic quantity that is shared by systems in equilibrium. “Cup of tea + information about all the molecules in the cup of tea” is in thermodynamic equilibrium with “Ice cube + kinetic energy (e.g. electricity)”, in that you can arrange a system where the two are in contact but do not exchange any net energy.
Note that it is NOT in thermodynamic equilibrium with anything hotter than an ice cube, as Eliezer described in spxtr’s linked article: http://lesswrong.com/lw/o5/the_second_law_of_thermodynamics_and_engines_of/
Basically, if you, say, try to use the information about the water and a Demon to put the system in thermal equilibrium with some warm water and electricity, you’ll either be prevented by conservation of energy or you’ll wind up not using all the information at your disposal. And if you don’t use the information it’s as if you didn’t have it.
The salient point is that the system is not in thermal equilibrium with anything ‘warmer’ than “Ice cube + free energy.”
If you know everything about the cup of tea, it really is at absolute zero, in the realest sense you could imagine.
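One way to put numbers on “tea + information” behaving like “ice cube + free energy”: each bit of missing microstate information taxes the extractable work by k_B T ln 2 (Landauer). A back-of-envelope sketch, with every figure made up:

```python
import math

k_B = 1.380649e-23     # J/K
T = 300.0              # K, temperature of the surroundings (assumed)

def work_bound(U, missing_bits):
    """Extractable work F = U - T*S, with S = k_B ln 2 per missing bit."""
    S = k_B * math.log(2) * missing_bits
    return U - T * S

U = 1.0  # one joule of thermal energy in the cup (made-up figure)
for bits in (0.0, 1e20, 3e20):
    print(f"{bits:.1e} missing bits -> at most {work_bound(U, bits):.3f} J of work")
```

With zero missing bits the entire joule is available as work, which is the sense in which the fully known cup sits at absolute zero; with a few times 10^20 missing bits, nearly all of it is stuck as heat.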
Hm. I have to think more about this.
Expanding on the billiard ball example: let’s say one part of the wall of the pool table adds some noise to the trajectory of the balls that bounce off of that spot, but doesn’t sap energy from them on average. After a while we won’t know the exact positions of the balls at an arbitrary time given only their initial positions and momenta. That is, entropy has entered our system through that part of the wall. I know this language makes it sound like entropy is in the system, flowing about, but if we knew the exact shape of the wall at that spot then it wouldn’t happen.
Even with this entropy entering our system, the energy remains constant. This is why total energy is a wonderful macrovariable for this system. Systems where this works are usually easily solved as a microcanonical ensemble. If, instead, that wall spot were held at a fixed temperature, we would use the canonical ensemble.
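A sketch of that noisy wall (all units and the noise model are invented, and to keep it short the entire right wall is the noisy patch): run many identically prepared copies of the table; the noise makes our knowledge of position decay while every copy keeps exactly the same kinetic energy.

```python
import numpy as np

rng = np.random.default_rng(2)

L, dt, steps = 1.0, 0.01, 4000
copies = 2000                        # identically prepared copies of the table

pos = np.full((copies, 2), 0.25)     # every copy starts in the same microstate
vel = np.tile([0.7, 0.4], (copies, 1))
speed = np.linalg.norm(vel[0])       # the noisy wall will preserve this

for _ in range(steps):
    pos += vel * dt
    # three ordinary walls: specular reflection
    vel[pos[:, 0] < 0, 0] *= -1
    vel[pos[:, 1] < 0, 1] *= -1
    vel[pos[:, 1] > L, 1] *= -1
    # the noisy wall at x = L: bounce back at a random angle, same speed,
    # so no energy is added or removed on any bounce
    hit = pos[:, 0] > L
    theta = rng.uniform(0.5 * np.pi, 1.5 * np.pi, size=hit.sum())  # leftward
    vel[hit] = speed * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    pos = np.clip(pos, 0.0, L)

print("spread in kinetic energy:", 0.5 * (vel**2).sum(axis=1).std())  # ~0
print("spread in position:      ", pos.std(axis=0))                   # grown
```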
Again, this is very different from the situation with entropy. I think you’re confusing two meanings of the word ‘model’. It’s one thing to have an incomplete description of the physics of the system (for instance, lacking nuclear forces, as you describe). It’s another to lack knowledge about the internal microstates of the system, even if all the relevant physics is known. (In the statistics view, these two meanings are analogous to the ‘model’ and the ‘parameters’, respectively.) Entropy measures the uncertainty in the distribution of the parameters. It measures something about our information about the system. The most vivid demonstration of this is that entropy changes the more you know about the parameters (microstates) of the system. In the limit of perfect microstate knowledge, the system has zero entropy and is at absolute zero. But energy (relative to the ground state) doesn’t change no matter how much information you gain about a system’s internal microstates.
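A minimal numerical version of that last point (eight invented microstates sharing one energy level): each bit of microstate knowledge lowers the entropy, and the expected energy never moves.

```python
import numpy as np

E = 2.0
energies = np.full(8, E)      # eight microstates, all with the same energy

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p = np.full(8, 1 / 8)                      # know nothing: uniform
print(entropy(p), p @ energies)            # ln 8 ~ 2.079, energy 2.0

p = np.array([1/4] * 4 + [0] * 4)          # learn one bit: first half
print(entropy(p), p @ energies)            # ln 4 ~ 1.386, energy 2.0

p = np.zeros(8); p[3] = 1.0                # learn everything: one microstate
print(entropy(p), p @ energies)            # 0.0, energy still 2.0
```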
I understand what you are saying, but I am not convinced that there is a big difference.
How would you change this uncertainty without disturbing the system?
How would you gain this information without disturbing the system (and hence changing its energy)?
EDIT: see also my reply to spxtr.
You have to define what ‘disturbing the system’ means. This is just the classical Maxwell’s demon question, and you can most definitely change this uncertainty without changing the thermodynamics of the system. Look at http://en.wikipedia.org/wiki/Maxwell%27s_demon#Criticism_and_development
In particular, the paragraph about Landauer’s work is relevant (and the cited Scientific American article is also interesting).
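For scale, here is the Landauer bound itself, applied to a deliberately crude cup-of-tea figure (the bit count is just order-of-magnitude arithmetic, not a serious estimate):

```python
import math

k_B = 1.380649e-23        # J/K
T = 300.0                 # K, room temperature (assumed)

# Landauer: erasing one bit dissipates at least k_B * T * ln 2.
per_bit = k_B * T * math.log(2)
print(f"minimum cost per erased bit: {per_bit:.3e} J")

# A demon tracking ~10^25 molecules (rough cup-of-tea scale, and really
# several bits per molecule) must eventually erase that record:
bits = 1e25
print(f"erasing the demon's memory: at least {bits * per_bit:.3e} J")
```

That comes to tens of kilojoules, comparable in order of magnitude to the heat in the tea itself, which is why the demon gets no free lunch.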