Conserved Quantities (Stat Mech Part 2)
As before, we will consider particles moving in boxes in an abstract and semi-formal way.
Imagine we have two types of particle, red and blue, in a box, and that they can change colour freely; as before, let's forget as much as possible. Our knowledge over particle states now looks like:
For each particle:

$$[\text{Position}] \sim \text{Uniform over the inside of the box}$$
$$[\text{Colour}] \sim \text{Uniform over } \{\text{Red}, \text{Blue}\}$$
Let’s connect these particles to a second box, but also introduce a colour-changing gate to the passage between the boxes. Particles can only go through the gate if they’re willing to change from red to blue, and only go back if they change colour in the opposite direction.
Without further rules, lots of particles will approach the gate. The blue ones will be turned away, but the red ones happily switch to blue as they move into the second box, and … promptly change back to red. We've not done anything interesting. We need a new rule: particles can't change colour without a good reason; instead, they must swap colours by bumping into each other.
Now the particles that end up in the second box must remain blue. Let's think about what happens when we start the box off. Unfortunately, this question as posed is unanswerable, because we've introduced a conserved quantity into our system. Any time we do this, we must specify how much of the quantity is in the system. As an exercise, think about what the conserved quantity is.
...
The conserved quantity is:
[Number of Red Particles in Box 1]+[Number of Particles in Box 2]
Or, equivalently, we could consider:
[Number of Blue Particles in Box 1]
To be constant, since the total number of particles is constant. Writing things out like this also makes the states of the system more explicit, so we can reason about the system more accurately. We expect the constant number of blue-and-box-1 particles to be distributed evenly throughout box 1, and we expect the constant number of red-or-box-2 particles to be distributed evenly throughout boxes 1 and 2. If (as above) both boxes are equally sized, this cashes out to the following rule:
For $n$ particles starting in box 1, of which $m$ start off red, we expect to end up with $n-m$ blue particles in box 1, $m/2$ red particles in box 1, and $m/2$ blue particles in box 2.
Remember, this only works because of the conserved quantity we have induced in our system.
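To make the rule concrete, here's a minimal Monte Carlo sketch of the gated two-box system. The specific dynamics (a randomly chosen particle crosses the gate whenever its state allows) is my own toy choice, and `simulate` and its parameters are illustrative, but any dynamics respecting the rules should equilibrate to the same counts:

```python
import random
from collections import Counter

# A toy Monte Carlo sketch of the gated two-box system.
# Each particle's state is ("R", 1), ("B", 1), or ("B", 2);
# ("R", 2) is unreachable, since crossing the gate forces a colour change.

def simulate(n=10_000, m=4_000, steps=500_000, seed=0):
    rng = random.Random(seed)
    # All n particles start in box 1, of which m are red.
    particles = [("R", 1)] * m + [("B", 1)] * (n - m)
    for _ in range(steps):
        i = rng.randrange(n)
        # Only gate crossings change the aggregate counts: in-box colour
        # swaps exchange colours between two particles, conserving both.
        if particles[i] == ("R", 1):
            particles[i] = ("B", 2)   # cross the gate, turning blue
        elif particles[i] == ("B", 2):
            particles[i] = ("R", 1)   # cross back, turning red
        # ("B", 1) particles have no move that changes the counts.
    return Counter(particles)

print(simulate())
# Expect roughly ("B", 1): 6000, ("R", 1): 2000, ("B", 2): 2000,
# i.e. n - m blue in box 1 and m/2 each in the other two states.
```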
Global Conservation
Now let's imagine that the walls of box 1 are somewhat permeable. Particles on either side cannot cross, but they can swap red-ness with external particles. We've now swapped our locally conserved quantity for a globally conserved quantity.
In this case, we can't eyeball things anymore. We have to go back to the maths of entropy. We can write the entropy of our system $H_{sys}$ as the sum of the entropies of the individual particles. The entropy of each particle can then be written as the entropy arising from our uncertainty over the three states $b_1, r_1, b_2$ (denoting colour and box) plus the entropy coming from our uncertainty over the position of a particle within a given box, which is constant.
$$H_{sys} = n\left[-p_{b1}\ln p_{b1} - p_{r1}\ln p_{r1} - p_{b2}\ln p_{b2} + H_{box}\right]$$
Thanks to our previous calculation, we can write all of these in terms of a single probability $p$, which we will choose to be $p = p_{r1} + p_{b2} = 1 - p_{b1} = 2p_{r1} = 2p_{b2}$.
$$H_{sys} = n\left[-\frac{p}{2}\ln\frac{p}{2} - \frac{p}{2}\ln\frac{p}{2} - (1-p)\ln(1-p) + H_{box}\right]$$

Using $-p\ln\frac{p}{2} = -p\ln p + p\ln 2$, this simplifies to:

$$H_{sys} = n\left[-p\ln p - (1-p)\ln(1-p) + p\ln 2 + H_{box}\right]$$
What we actually care about, as it turns out, is the derivative of entropy with respect to this parameter:
$$\frac{dH_{sys}}{dp} = n\left[-\ln p + \ln(1-p) + \ln 2\right]$$
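As a sanity check, we can verify this derivative symbolically (a quick sketch using sympy; the variable names are arbitrary):

```python
import sympy as sp

# Symbolic check of dH_sys/dp; H_box is constant and drops out of the derivative.
p, n = sp.symbols("p n", positive=True)
H_sys = n * (-p*sp.log(p) - (1 - p)*sp.log(1 - p) + p*sp.log(2))
dH = sp.diff(H_sys, p)
expected = n * (-sp.log(p) + sp.log(1 - p) + sp.log(2))
print(sp.simplify(dH - expected))  # prints 0
```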
This derivative is almost the same as the derivative of entropy in a simple two-state system where $p$ is the probability of being in one of the states. In a way, we do have a two-state system, where the states are [Blue,1] and [[Red,1]∨[Blue,2]]. The only difference is that for particles in the state [[Red,1]∨[Blue,2]], there is an extra bit of uncertainty over position, hence the $\ln 2$ term. We can think of this system in two equivalent ways:
A three-state system with states [Blue,1],[Red,1],[Blue,2], where P(Red,1)=P(Blue,2). The entropy is just the entropy calculated over all three states.
A two-state system with states [Blue,1] and [[Red,1]∨[Blue,2]], with no restrictions on the distribution other than $p$. The entropy is calculated over the two states, but an extra $p\ln 2$ term is added to correct for the intrinsically higher entropy of the second state.
The second approach is the one most commonly taken in stat mech, where we often have very complex systems. In fact, we already saw this concept in the last post, when we considered the entropy of a particle distributed over two boxes of different sizes.
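A quick numerical check of this equivalence (a sketch assuming natural-log entropy, with the constant $H_{box}$ omitted from both sides):

```python
import numpy as np

# Two ways of bookkeeping the per-particle entropy (H_box omitted from both).
def three_state_entropy(p):
    probs = np.array([1 - p, p / 2, p / 2])  # [Blue,1], [Red,1], [Blue,2]
    return -np.sum(probs * np.log(probs))

def two_state_entropy(p):
    probs = np.array([1 - p, p])             # [Blue,1], [[Red,1] or [Blue,2]]
    # The p*ln(2) term corrects for the intrinsic entropy of the merged state.
    return -np.sum(probs * np.log(probs)) + p * np.log(2)

for p in [0.1, 0.5, 0.9]:
    print(three_state_entropy(p), two_state_entropy(p))  # identical pairs
```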
Derivatives In Terms of Red-ness
Our previous calculation found the derivative of $H_{sys}$ in terms of $p$. This is a bad choice, since we don't want to think of changing $p$ directly. Instead we have to think in terms of changing $n_{r1 \vee b2}$, which for shorthand I'll call $m$ as before. Because $m = p \times n$, and $n$ is constant, we can just divide by $n$ to get the derivative of $H_{sys}$ in terms of $m$:
$$\frac{dH_{sys}}{dm} = -\ln p + \ln(1-p) + \ln 2$$
This is even better, since it doesn't depend on $n$ at all! Now we must consider the external entropy $H_{ext}$. Let's say we have $n_{ext} \to \infty$ particles outside the box, each of which has probability $q$ of being red. If $H_{ext}$ is the entropy coming from the entire exterior of the box, and the positional entropy is the same for red and blue external particles, we can find the derivative of $H_{ext}$ with respect to $q$ quite simply:
$$\frac{dH_{ext}}{dq} = n_{ext}\left[-\ln q + \ln(1-q)\right]$$
From which follows the derivative in terms of the number of red particles outside the box, $m_{ext} = n_{ext} \times q$:
$$\frac{dH_{ext}}{dm_{ext}} = -\ln q + \ln(1-q)$$
The important part here is that, as $n_{ext} \to \infty$, the derivative of $H_{ext}$ with respect to $m_{ext}$ remains constant. For sufficiently large $n_{ext}$, we can totally ignore the change in $q$ when $m_{ext}$ changes. Finally, if we assume that $m + m_{ext}$ is constant, we can write down the following derivative:
$$\frac{dH_{tot}}{dm} = \frac{dH_{sys}}{dm} - \frac{dH_{ext}}{dm_{ext}} = -\ln p + \ln(1-p) + \ln 2 + \ln q - \ln(1-q)$$
If we want to find the maximum entropy, i.e. to forget as much as possible, then we set this derivative to zero, which gives the following relation:
$$\ln p - \ln(1-p) = \ln 2 + \ln q - \ln(1-q)$$
$$\ln\frac{p}{1-p} = \ln\frac{2q}{1-q}$$
$$\frac{p}{1-p} = \frac{2q}{1-q}$$
$$p = \frac{2q}{q+1}$$
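We can check this closed form numerically by maximizing the total entropy per particle directly (a sketch assuming the reservoir's contribution is exactly linear in $m$, which is the $n_{ext} \to \infty$ limit argued above; function names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Maximize the total entropy per particle and compare with p = 2q/(q+1).
# Since dH_ext/dm_ext is constant in the n_ext -> infinity limit, the
# reservoir contributes a term linear in m, i.e. p*(ln q - ln(1-q)) per particle.
def neg_total_entropy(p, q):
    H_sys = -p*np.log(p) - (1 - p)*np.log(1 - p) + p*np.log(2)
    return -(H_sys + p*(np.log(q) - np.log(1 - q)))

for q in [0.1, 0.3, 0.5, 0.7]:
    res = minimize_scalar(neg_total_entropy, bounds=(1e-9, 1 - 1e-9),
                          method="bounded", args=(q,))
    print(f"q={q}: numeric p={res.x:.4f}, closed form {2*q/(1+q):.4f}")
```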
Normally it's not so easy to solve the relevant equation in terms of obvious parameters of the external world like $q$. In most cases, we work out the solution in terms of the derivative:
$$\frac{dH_{ext}}{dm_{ext}}$$
Which is often easier to measure than you might think! But you’ll have to wait for the next post for that.
Conclusions
When we induce a conserved quantity in our system, we must specify how much of that quantity is present.
When we look at a globally conserved quantity, we must instead specify the derivative of the total external entropy $H_{ext}$ with respect to the total external amount of that quantity.
We can switch between more micro-level views of individual states, and more macro-level views of states with “intrinsic” entropy.