Temperature is then defined as the thermodynamic quantity that is shared by systems in equilibrium.
I think I’ve figured out what’s bothering me about this. If we think of temperature in terms of our uncertainty about where the system is in phase space, rather than how large a region of phase space fits the macroscopic state, then we gain a little in using the second law, but give up a lot everywhere else. Unless I am mistaken, we lose the following:
Heat flows from hot to cold
Momentum distribution can be predicted from temperature
Phase changes can be predicted from temperature
The reading on a thermometer can be predicted from temperature
I’m sure there are others. I realize that if we know the full microscopic state of a system, then we don’t need to use temperature for these things, but then we wouldn’t need to use temperature at all.
if you know the states of all the molecules in a glass of hot water, it is cold in a genuinely thermodynamic sense: you can take electricity out of it and leave behind an ice cube.
If you’re able to do this, I don’t see why you’d be using temperature at all, unless you want to talk about how hot the water is to begin with (as you did), in which case you’re referring to the temperature that the water would be if we had no microscopic information.
We don’t lose those things. Remember, this isn’t my definition. This is the actual definition of temperature used by statistical physicists. Anything statistical physics predicts (all of the things you listed) is predicted by this definition.
You’re right though. If you know the state of the molecules in the water then you don’t need to think about temperature. That’s a feature, not a bug.
Suppose that you boil some water in a pot. You take the pot off the stove, and then take a can of beer out of the cooler (which is filled with ice) and put it in the water. The place where you’re confusing your friends by putting cans of beer in pots of hot water is by the ocean, so when you read the thermometer that’s in the water, it reads 373 K. The can of beer, which was in equilibrium with the ice at a measured 273 K, had some bits of ice stuck to it when you put it in. They melt. Next, you pull out your fancy laser-doppler-shift-based water molecule momentum spread measurer. The result jibes with 373 K liquid water. After a short time, you read the thermometer as 360 K (the control pot with no beer reads 371 K). There is no ice left in the pot. You take out the beer, open it, and measure its temperature to be 293 K and its momentum width to be narrower than that of the boiling water.
What we observed was:
Heat flowed from 373 K water to 273 K beer
The momentum distribution is wider for water at 373 K than at 293 K
Ice placed in 373 K water melts
Our thermometer reads 373 K for boiling water and 273 K for water-ice equilibrium
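The momentum-width observation can be sketched numerically. This is my own toy calculation, not part of the thought experiment: classically, each momentum component of a molecule is Gaussian with standard deviation sqrt(m·kB·T), so the spread measured by the hypothetical doppler gadget should scale as sqrt(T).

```python
import math

# Sketch (my numbers, not from the comment above): for a classical
# molecule, each momentum component is Gaussian with standard
# deviation sigma_p = sqrt(m * kB * T), so the spread grows as sqrt(T).
kB = 1.380649e-23       # Boltzmann constant, J/K
m_water = 2.992e-26     # mass of one H2O molecule, kg (~18 u)

def momentum_width(T, m=m_water):
    """Std. dev. of one momentum component at temperature T (kg*m/s)."""
    return math.sqrt(m * kB * T)

w_hot  = momentum_width(373.0)   # the near-boiling water
w_cool = momentum_width(293.0)   # the opened beer
print(w_hot / w_cool)            # ~1.128: hotter means a wider distribution
```

The ratio depends only on the temperatures, which is why the same gadget can double as a thermometer.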
Now, suppose we do exactly the same thing, but just after putting the beer in the water, Omega tells us the state of every water molecule in the pot, but not the beer. Now we know the temperature of the water is exactly 0 K. We still anticipate the same outcome (perhaps more precisely), and observe the same outcome for all of our measurements, but we describe it differently:
Heat flowed from 0 K water to 273 K beer
The momentum distribution is wider for water at 0 K (or recently at 0 K) than at 293 K
Ice placed in 0 K water melts
Our thermometer reads 373 K for water boiling at 0 K, and 273 K for water-ice equilibrium
So the only difference is in the map, not the territory, and it seems to be only in how we’re labeling the map, since we anticipate the same outcome using the same model (assuming you didn’t use the specific molecular states in your prediction).
Remember, this isn’t my definition. This is the actual definition of temperature used by statistical physicists.
I agree that temperature should be defined so that 1/T = dS/dE. This is the definition that, as far as I can tell, all physicists use. But nearly every result that uses temperature is derived using the assumption that all microstates are equally probable (your second-law example being the only exception that I am aware of). In fact, this is often given as a fundamental assumption of statistical mechanics, and I think this is what makes the “glass of water at absolute zero” comment confusing. (Moreover, many physicists, such as plasma physicists, will often say that the temperature is not well defined unless certain statistical conditions are met, like the energy and momentum distributions having the correct form, or the system being locally in thermal equilibrium with itself.)
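As a quick numerical check of the 1/T = dS/dE definition (my own toy setup, with assumed constants and an arbitrary helium gas): take the Sackur-Tetrode entropy of a monatomic ideal gas, differentiate it numerically with respect to energy, and compare against the textbook result T = 2E/(3·N·kB).

```python
import math

# Toy check (my setup, not from the thread): define S(E) for a
# monatomic ideal gas via the Sackur-Tetrode equation, recover T
# from 1/T = dS/dE, and compare with the known T = 2E / (3 N kB).
kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J*s
N  = 1e22            # number of atoms (assumed)
V  = 1e-3            # volume, m^3 (assumed)
m  = 6.6465e-27      # helium-4 atom mass, kg

def S(E):
    """Sackur-Tetrode entropy of the gas at total energy E (J/K)."""
    lam = (4 * math.pi * m * E / (3 * N * h**2)) ** 1.5
    return N * kB * (math.log(V / N * lam) + 2.5)

E = 100.0                                   # total energy, J
dE = E * 1e-6
dSdE = (S(E + dE) - S(E - dE)) / (2 * dE)   # central difference
T_from_definition = 1.0 / dSdE
T_expected = 2 * E / (3 * N * kB)
print(T_from_definition, T_expected)        # should agree closely
```

The point being that the derivative definition reproduces the familiar kinetic result, so the disagreement is about what counts as S, not about the formula.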
I’m having trouble with brevity here, but what I’m getting at is this: if you want to show that we can drop the fundamental postulate of statistical mechanics and still recover the second law of thermodynamics, then I’m happy to call it a feature rather than a bug. But it seems like bringing in temperature confuses the issue rather than clarifying it.
Omega tells us the state of the water at time t = 0, when we put the beer into it. There are two ways of looking at what happens immediately after.
The first way is that the water doesn’t flow heat into the beer; rather, it does some work on it. If we know the state of the beer/water interface as well, then we can calculate exactly what will happen. It will look like quick water molecules thumping into slow boundary molecules and doing work on them. This is why the concept of temperature is no longer necessary: if we know everything, then we can just do mechanics. Unfortunately, we don’t know everything about the full system, so this won’t quite work.
Think about your uncertainty about the state of the water as you run time forward. It’s initially zero, but the water is in contact with something that could be in any number of states (the beer), and so the entropy of the water is going to rise extremely quickly.
The water will initially be doing work on the beer, but after an extremely short time it will be flowing heat into it. One observer’s work is another’s heat, essentially.
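A toy illustration of that entropy growth (my own construction, not a model of water): suppose we know a system's state exactly, but each step an unobserved environment kicks it left or right with equal probability. Our distribution over its state spreads, so the entropy we assign rises, even though the underlying dynamics are deterministic.

```python
import math

# Toy model: we know the "water" state exactly at t = 0, but each
# step the unobserved "beer" kicks it +1 or -1 with probability 1/2.
# Track the Shannon entropy of our distribution over the state.

def shannon_entropy(dist):
    """Entropy in bits of a discrete distribution {state: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

dist = {0: 1.0}                  # certainty: entropy is exactly zero
entropies = [shannon_entropy(dist)]
for _ in range(10):
    new = {}
    for s, p in dist.items():
        for kick in (-1, 1):     # unknown environment interaction
            new[s + kick] = new.get(s + kick, 0.0) + 0.5 * p
    dist = new
    entropies.append(shannon_entropy(dist))

print(entropies[0])  # 0.0 -- we start with full knowledge
print(entropies[1])  # 1.0 -- one unknown kick costs us one bit
```

Each contact with the unknown environment strictly increases our uncertainty, which is the mechanism by which the "0 K" water stops being 0 K almost immediately.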
The rule that all microstates that are consistent with a given macrostate are equally probable is a consequence of the maximum entropy principle. See this Jaynes paper.
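A minimal sketch of that claim (mine, with three arbitrary microstates): if normalization is the only constraint, the uniform distribution has strictly higher Shannon entropy than any alternative, which is Jaynes' route to "all microstates consistent with the macrostate are equally probable."

```python
import math

# Toy check: among distributions over three microstates with no
# constraint beyond normalization, the uniform one maximizes entropy.

def entropy(p):
    """Shannon entropy in nats of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

uniform = [1/3, 1/3, 1/3]
others = [[0.5, 0.3, 0.2], [0.6, 0.2, 0.2], [0.98, 0.01, 0.01]]
print(entropy(uniform))                                    # log(3) ~ 1.0986
print(all(entropy(q) < entropy(uniform) for q in others))  # True
```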
This actually clears things up quite a lot. I think my discomfort with this description is mainly aesthetic. Thank you for being patient.