You’re applying intuitions garnered from classical thermodynamics, but thermodynamics is a phenomenological theory entirely superseded by statistical mechanics. It’s sort of like applying Newtonian intuitions to resist the implications of relativity.
Yes, in classical thermodynamics entropy is a state function—a property of an equilibrium state just like its volume or magnetization—but we now know (thanks to stat. mech.) that this is not the best way to think about entropy. Entropy is actually a property of probability distributions over phase space, and if you believe that probability is in the mind, it’s hard to deny that entropy is in some sense an agent-relative notion. If probability is in the mind and entropy depends on probability, then entropy is at least partially in the mind as well.
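To make the distribution-dependence concrete, here is a minimal sketch in Python (the two distributions are made-up toy examples, not derived from any physical system): the Gibbs/Shannon entropy formula assigns different entropies to the same system depending on what the agent's probability distribution over microstates looks like.

```python
import math

def gibbs_entropy(p):
    """Gibbs/Shannon entropy (in nats) of a discrete distribution over microstates."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# An agent who only knows the system is in one of four microstates:
broad = [0.25, 0.25, 0.25, 0.25]
# A better-informed agent who has narrowed it down to two:
narrow = [0.5, 0.5, 0.0, 0.0]

print(gibbs_entropy(broad))   # ln 4, about 1.386
print(gibbs_entropy(narrow))  # ln 2, about 0.693
```

Same system, two agents, two entropies: the quantity tracks the distribution, and the distribution tracks the agent's information.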
Still, the agent-relativity can be seen in thermodynamics as well, without having to adopt the probabilistic conception of entropy. The First Law tells us that any change in the internal energy of the system is a sum of the heat transferred to the system and the work on the system. But how do we distinguish between these two forms of energy transfer? Well, heat is energy transferred through macroscopically uncontrollable degrees of freedom, while work is energy transferred through macroscopically controllable degrees of freedom. Whether a particular degree of freedom is macroscopically controllable is an agent-relative notion. Here is the fundamental equation of thermodynamics:
dE = T dS + F1 dX1 + F2 dX2 + F3 dX3 + …
The Fs and Xs here are macroscopic “force” and “displacement” terms, representing different ways we can do mechanical work on the system (or extract work from the system) by adjusting its macroscopic constraints. Particular examples of these force-displacement pairs are pressure-volume (usually this is the only one considered in introductory courses on thermodynamics), electric field-polarization, tension-length. These work terms—the controllable degrees of freedom—are chosen based on our ability to interact with the system, which in turn depends on the kinds of creatures we are. Any part of the change in energy that is not explicable by the work terms is attributed to the heat term—T dS—and the S here is of course thermodynamic entropy. So the entropy comes from the heat term, which depends on the work terms, which in turn depend on our capacities for macroscopic intervention on the system. Aliens with radically different capacities could have different work terms and hence calculate a different thermodynamic entropy. [ETA: And of course the thermodynamic state space is defined by the work terms, which explains how entropy can be a state function and still be an agent-dependent quantity.]
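The bookkeeping argument above can be sketched in a few lines of Python (the numbers are hypothetical, chosen only to illustrate the accounting): two agents observe the same total energy change, but because they can control different degrees of freedom, they attribute different portions of it to work versus heat.

```python
# Toy bookkeeping for dE = T dS + sum of F_i dX_i.
# The same total energy change dE is split differently depending on which
# degrees of freedom an agent can macroscopically control (hypothetical numbers).
dE = -10.0  # total change in internal energy (arbitrary units)

# Agent A can only manipulate the volume, so only the P dV term counts as work:
work_terms_A = {"P dV": -4.0}
# Agent B can also manipulate the polarization, so E dP counts as work too:
work_terms_B = {"P dV": -4.0, "E dP": -3.0}

# Heat is the residual: whatever part of dE the work terms cannot explain.
heat_A = dE - sum(work_terms_A.values())  # -6.0 attributed to T dS by agent A
heat_B = dE - sum(work_terms_B.values())  # -3.0 attributed to T dS by agent B
print(heat_A, heat_B)
```

Agent B, with finer control, files less of the energy change under "heat", and so infers a smaller entropy change for the same physical process.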
The work we can extract from a system depends on our knowledge of the system. This is a point that has been understood for a while. Read this post on the Szilard engine for a nice illustration of how our knowledge about a system can affect the amount of work we can get it to do. But of course if extractable work depends on knowledge, then the heat dissipated by the system must also depend on our knowledge, since heat is just the complement of work (it is that portion of the energy change that cannot be accounted for by work done). And if the heat dissipated is a function of our knowledge, so is the entropy. If our capacities were different—if we could have more or different knowledge about the system—our judgment of its entropy would differ.
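The standard quantitative version of the Szilard-engine point is that one bit of information about which side of the partition the molecule is on lets you extract at most k_B T ln 2 of work from the bath. A quick sketch of that formula (the function name and interface are mine, for illustration):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def szilard_work(temperature_K, bits):
    """Maximum work (J) extractable isothermally from a Szilard engine,
    given `bits` of information about the molecule's location."""
    return bits * K_B * temperature_K * math.log(2)

w = szilard_work(300.0, 1)  # one bit of knowledge at room temperature
print(w)  # roughly 2.87e-21 J
```

Zero bits, zero extractable work: the work term, and hence the residual heat term, is literally a function of what the agent knows.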
The maximum work you can extract from a system does not depend on knowledge: greater knowledge may let you extract that work more efficiently, and if you operate on a scale where raising an electron to a higher energy state counts as 'useful work' rather than 'heat', then you can minimize the heat term.
But you can’t have perfect knowledge about the system, because matter cannot be perfectly described. If the state of the box becomes more knowable than it was (per Heisenberg uncertainty), then the state outside the box must become less knowable than it was. You could measure the knowability of a system by determining how many states are microscopically indistinguishable from the observed state: as the energies of the particles equalize (such that the number of possible Planck-unit positions is more equally divided between all the particles), there are more total states which are indistinguishable, since the total number of possible states is equal to the product of the number of possible states for each particle, and energy is conserved.
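The combinatorial claim at the end, that the product of per-particle state counts is largest when the fixed total is split evenly, can be checked directly. A toy sketch (the numbers are invented; this is just the arithmetic, not a physical model):

```python
from math import prod

def multiplicity(states_per_particle):
    """Total number of joint microstates: the product of per-particle counts."""
    return prod(states_per_particle)

# A fixed budget of 12 "state-count units" split among 3 particles:
print(multiplicity([4, 4, 4]))   # 64: evenly spread, most total states
print(multiplicity([6, 3, 3]))   # 54
print(multiplicity([10, 1, 1]))  # 10: concentrated, fewest total states
```

For a fixed sum, the product is maximized at the even split, which is the arithmetic behind "equalized energies mean more indistinguishable states."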
If you can show spontaneous interactions in which two particles have a greater difference in total energy after they interact than they had before, feel free to win every Nobel prize ever.