[Spoiler alert: I can’t find any ‘spoiler’ mode for comments, so I’m just going to give the answers here, after a break, so collapse the comment if you don’t want to see that]
.
.
.
.
.
.
.
.
.
.
For the entropy (in natural units), I get

S(E) = N ln N − (E/ε) ln(E/ε) − (N − E/ε) ln(N − E/ε)

and for the energy, I get

E(T) = εN / (e^(ε/T) − 1)

Is this right? (upon reflection and upon consulting graphs, it seems right to me, but I don’t trust my intuition for statistical mechanics)
Not quite, but close. It should be a + instead of a − in the denominator. Nice work, though.
You have the right formula for the entropy. Notice that it is nearly identical to the Bernoulli distribution entropy. That should make sense: there is only one state with energy 0 or Nε, so the entropy should go to 0 at those limits. Its maximum is at Nε/2. Past that point, adding energy to the system actually decreases entropy. This leads to a negative temperature!
But we can’t actually reach that by raising its temperature. As we raise temperature to infinity, energy caps at Nε/2 (specific heat goes to 0). To put more energy in, we have to actually find some particles that are switched off and switch them on. We can’t just put it in equilibrium with a hotter thing.
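A quick numerical sketch of the two claims above (entropy vanishing at the endpoints and peaking at Nε/2, and the sign of dS/dE flipping past the peak), using the Stirling-approximated entropy formula with ε = 1:

```python
import math

# Two-level system: N particles, gap ε = 1; x = E/ε is the number excited.
# Stirling-approximated entropy: S = N ln N - x ln x - (N - x) ln(N - x).
def entropy(x, N):
    if x == 0 or x == N:          # exactly one microstate at the extremes
        return 0.0
    return N * math.log(N) - x * math.log(x) - (N - x) * math.log(N - x)

N = 1000
# S vanishes at E = 0 and E = Nε, and peaks at E = Nε/2:
assert entropy(0, N) == 0.0 and entropy(N, N) == 0.0
assert entropy(N // 2, N) > entropy(N // 4, N) > 0

# Past the peak, adding energy lowers S, so 1/T = dS/dE goes negative:
dS = lambda x: entropy(x + 1, N) - entropy(x, N)   # ΔS per unit ε of energy
assert dS(N // 4) > 0          # positive temperature below Nε/2
assert dS(3 * N // 4) < 0      # negative temperature above Nε/2
```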
I made a plot of the entropy and the (correct) energy. Every feature of these plots should make sense.
Note that the exponential turn-on in E(T) is a common feature to any gapped material. Semiconductors do this too :)
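Both features of the plot, the exponential turn-on at low T and the saturation at Nε/2 (not εN) as T → ∞, can be checked numerically from the corrected energy formula E(T) = εN / (e^(ε/T) + 1), with the + sign per the reply above (a sketch with ε = N = 1):

```python
import math

# Corrected energy of the two-level system: E(T) = εN / (e^(ε/T) + 1).
def E(T, eps=1.0, N=1.0):
    return eps * N / (math.exp(eps / T) + 1)

# Exponential turn-on at low T: the gap suppresses excitations.
assert E(0.05) < 1e-8
# Saturation at Nε/2 as T → ∞ (specific heat goes to 0):
assert abs(E(1e6) - 0.5) < 1e-5
assert E(0.5) < E(1.0) < E(10.0) < 0.5   # monotonic approach from below
```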
Why did you only show the E(T) function for positive temperatures?
This is a good point. The negative side gives good intuition for the “negative temperatures are hotter than any positive temperature” argument.
What gives a better intuition is thinking in inverse temperature.
Regular temperature is ‘how weakly is this thing trying to grab more energy so as to increase its entropy’.
Inverse temperature is ‘how strongly...’ and when that gets down to 0, it’s natural to see it continue on into negatives, where it’s trying to shed energy to increase its entropy.
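The inverse-temperature picture is easy to see in this system: differentiating the entropy formula gives β = dS/dE = (1/ε) ln((N − x)/x) with x = E/ε, which decreases smoothly through 0 at half-filling while T = 1/β jumps from +∞ to −∞. A numerical sketch (ε = 1):

```python
import math

# Inverse temperature β(x) = ln((N - x) / x), from S = N ln N
# - x ln x - (N - x) ln(N - x) with x = E/ε and ε = 1.
def beta(x, N):
    return math.log((N - x) / x)

N = 1000
assert beta(N // 4, N) > 0               # below half-filling: positive T
assert abs(beta(N // 2, N)) < 1e-12      # β = 0 at E = Nε/2 ("infinite T")
assert beta(3 * N // 4, N) < 0           # above half-filling: negative T
# β falls monotonically as energy is added, i.e. the system keeps getting
# "hotter" in the β picture with no discontinuity:
assert beta(N // 4, N) > beta(N // 2, N) > beta(3 * N // 4, N)
```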
No reason. Fixed.
The energy/entropy plot makes total sense, the energy/temperature doesn’t really because I don’t have a good feel for what temperature actually is, even after reading the “Temperature” section of your argument (it previously made sense because Mathematica was only showing me the linear-like part of the graph). Can you recommend a good text to improve my intuition? Bonus points if this recommendation arrives in the next 9.5 hours, because then I can get the book from my university library.
Depends on your background in physics. Landau & Lifshitz Statistical Mechanics is probably the best, but you won’t get much out of it if you haven’t taken some physics courses.
I gave this a shot as well, since your E(T) → ∞ as T → ∞, while I would think the system should cap out at εN.
I get a different value for S(E), reasoning:
If E/ε is 1, there are N microstates, since 1 of N positions is at energy ε. If E/ε is 2, there are N(N−1) microstates, and so on: for E/ε = x there are N!/(N−x)! microstates.
so S = ln[N!/(N−x)!] = ln(N!) − ln((N−x)!) ≈ N ln N − (N−x) ln(N−x) (by Stirling)
S(E) = N ln N − (N − E/ε) ln(N − E/ε)
Can you explain how you got your equation for the entropy?
Going on I get E(T) = ε(N − e^(ε/T − 1))
This also looks wrong: although E no longer diverges as T → ∞, it doesn’t cap at exactly εN, and E → −∞ for T → 0...
I’m expecting the answer to look something like: E(T) = εN(1 - e^(-ε/T))/2 which ranges from 0 to εN/2, which seems sensible.
EDIT: Nevermind, the answer was posted while I was writing this. I’d still like to know how you got your S(E) though.
S(E) is the log of the number of states in phase space that are consistent with energy E. Having energy E means that E/ε particles are excited, so we get (N choose E/ε) states. Now take the log :)
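For a small system, the binomial count can be verified by brute force; it also shows why the N!/(N−x)! count above overshoots: it distinguishes the order in which the x excited particles are picked, overcounting by exactly x!. A sketch:

```python
from itertools import combinations
from math import comb, factorial

# Count microstates with x of N particles excited, by brute force:
N, x = 6, 2
brute = sum(1 for _ in combinations(range(N), x))  # choose which x are excited

# (N choose x) matches the brute-force count, while N!/(N-x)!
# overcounts by the x! orderings of the excited particles:
assert brute == comb(N, x) == 15
assert factorial(N) // factorial(N - x) == comb(N, x) * factorial(x)
```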