Now think of some kind of alien with a weird sensory apparatus. Its senses do not allow it to distinguish between different ways of initially dropping the ink into the water: the percepts associated with an ink drop on the right side of the cup and a drop on the left side are sufficiently similar that it cannot tell the difference. However, it is able to distinguish between different spread-out configurations. To this alien, the ink mixing in water would be an entropy-decreasing process, because its natural macrostates are different from ours. Obviously the alien’s sensory and cognitive apparatus would be hugely different from our own, and there might be all kinds of biological reasons not to expect such an alien to exist, but the point is that nothing in the fundamental laws of physics rules out its existence.
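To make the partition-dependence concrete, here is a toy sketch (plain Python; the cell model and the two macrostate functions are invented for illustration, not drawn from any physics text). The same microstate gets a different Boltzmann entropy S = ln Ω depending on whether the "human" or the "alien" coarse-graining is used to count the indistinguishable microstates Ω:

```python
import math
from itertools import product

# Toy model: 4 ink particles, each in one of 8 cells (cells 0-3 = left half,
# cells 4-7 = right half). A "microstate" is a tuple of cell indices.
microstates = list(product(range(8), repeat=4))

def human_macro(state):
    # Humans distinguish where the ink is: count particles in the left half.
    return sum(1 for cell in state if cell < 4)

def alien_macro(state):
    # The hypothetical alien cannot tell left from right, but does distinguish
    # how spread out the ink is: count how many distinct cells are occupied.
    return len(set(state))

def boltzmann_entropy(macro_fn, observed_value):
    # S = ln(omega) with k_B = 1; omega = number of microstates in the macrostate.
    omega = sum(1 for s in microstates if macro_fn(s) == observed_value)
    return math.log(omega)

# The same physical microstate gets different entropies under the two partitions.
state = (0, 1, 2, 3)  # all ink on the left, fully spread within that half
print("human entropy:", boltzmann_entropy(human_macro, human_macro(state)))  # ln 256
print("alien entropy:", boltzmann_entropy(alien_macro, alien_macro(state)))  # ln 1680
```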
This is where you tried to define the entropy of diffuse ink to be lower.
The highest-entropy state is the one in which the constraints on each variable are least restrictive. That means that a state where each ink particle can be in any position within the glass is (other things being equal) higher entropy than a state where each ink particle is constrained to a small area.
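The counting behind this fits in a few lines (a back-of-the-envelope sketch with made-up numbers N, V, v): if each of N particles can occupy any of V cells rather than being confined to v of them, the number of configurations grows from v^N to V^N, for an entropy gain of N ln(V/v).

```python
import math

# N independent ink particles; each may occupy any of V cells when diffuse,
# but only v < V cells when confined to a droplet. Omega = (cells)^N, so we
# work with ln(Omega) to avoid astronomically large numbers.
N, V, v = 100, 1000, 10

ln_omega_diffuse = N * math.log(V)   # ln(V**N)
ln_omega_droplet = N * math.log(v)   # ln(v**N)

# Entropy difference in units of k_B: Delta S = N * ln(V / v)
delta_S = ln_omega_diffuse - ln_omega_droplet
print(f"Delta S = {delta_S:.1f} k_B")   # 100 * ln(100) ≈ 460.5
```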
Entropy is a physical property similar to temperature, in that at a fundamental level it reduces to the momenta of particles. If you examine a closed Carnot cycle, you will note that the source loses heat, the sink gains heat, and the source must be hotter than the sink. Since there is no method by which the coldest sink can be made colder, nor by which the total energy can be increased, the gap between them can only decrease.
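For reference, the Carnot bookkeeping that argument leans on looks like this in a few lines (illustrative temperatures and heats, not measurements):

```python
# Carnot cycle bookkeeping: the source at T_h loses heat Q_h, the sink at T_c
# gains heat Q_c, and the difference is extracted as work.
T_h, T_c = 500.0, 300.0           # temperatures in kelvin
Q_h = 1000.0                      # heat taken from the hot source, in joules

efficiency = 1 - T_c / T_h        # Carnot efficiency, the best any engine can do
W = efficiency * Q_h              # work extracted
Q_c = Q_h - W                     # heat dumped into the cold sink

print(f"efficiency = {efficiency:.0%}, work = {W:.0f} J, rejected heat = {Q_c:.0f} J")
# For a reversible cycle Q_h / T_h == Q_c / T_c, so total entropy change is zero;
# any irreversibility only shrinks the usable gap between source and sink.
```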
You’re applying intuitions garnered from classical thermodynamics, but thermodynamics is a phenomenological theory entirely superseded by statistical mechanics. It’s sort of like applying Newtonian intuitions to resist the implications of relativity.
Yes, in classical thermodynamics entropy is a state function—a property of an equilibrium state just like its volume or magnetization—but we now know (thanks to stat. mech.) that this is not the best way to think about entropy. Entropy is actually a property of probability distributions over phase space, and if you believe that probability is in the mind, it’s hard to deny that entropy is in some sense an agent-relative notion. If probability is in the mind and entropy depends on probability, then entropy is at least partially in the mind as well.
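Concretely, the Gibbs/Shannon entropy S = −Σᵢ pᵢ ln pᵢ is a functional of the distribution, so two agents describing the same system with different distributions assign it different entropies. A minimal sketch with toy numbers:

```python
import math

def gibbs_entropy(p):
    # S = -sum_i p_i * ln(p_i), in units of k_B; zero-probability terms contribute nothing.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Same system, six possible microstates. An agent who knows nothing assigns a
# uniform distribution; an agent who has made a measurement assigns a sharper one.
ignorant = [1/6] * 6
informed = [0.9, 0.02, 0.02, 0.02, 0.02, 0.02]

print("ignorant agent's entropy:", gibbs_entropy(ignorant))   # ln 6 ≈ 1.79
print("informed agent's entropy:", gibbs_entropy(informed))   # ≈ 0.49
```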
Still, the agent-relativity can be seen in thermodynamics as well, without having to adopt the probabilistic conception of entropy. The First Law tells us that any change in the internal energy of the system is the sum of the heat transferred to the system and the work done on the system. But how do we distinguish between these two forms of energy transfer? Heat is energy transferred through macroscopically uncontrollable degrees of freedom, while work is energy transferred through macroscopically controllable degrees of freedom. And whether a particular degree of freedom is macroscopically controllable is an agent-relative notion. Here is the fundamental equation of thermodynamics:
dE = T dS + F₁ dX₁ + F₂ dX₂ + F₃ dX₃ + …
The Fs and Xs here are macroscopic “force” and “displacement” terms, representing different ways we can do mechanical work on the system (or extract work from it) by adjusting its macroscopic constraints. Particular examples of these force-displacement pairs are pressure-volume (usually the only one considered in introductory thermodynamics courses), electric field-polarization, and tension-length. These work terms—the controllable degrees of freedom—are chosen based on our ability to interact with the system, which in turn depends on the kinds of creatures we are. Any part of the change in energy that is not explicable by the work terms is attributed to the heat term—T dS—and the S here is of course thermodynamic entropy. So the entropy comes from the heat term, which depends on the work terms, which in turn depend on our capacities for macroscopic intervention on the system. Aliens with radically different capacities could have different work terms and hence calculate a different thermodynamic entropy. [ETA: And of course the thermodynamic state space is defined by the work terms, which explains how entropy can be a state function and still be an agent-dependent quantity.]
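A minimal sketch of that accounting (Python, with invented numbers and an invented second work term) shows how the same energy change splits differently for agents who recognize different work terms, since whatever the work terms fail to explain gets booked as heat:

```python
# Bookkeeping implied by dE = T dS + sum_i F_i dX_i: whatever part of the energy
# change the chosen work terms fail to account for is attributed to heat, T dS.

def entropy_change(dE, T, work_terms):
    # work_terms is a list of (F_i, dX_i) pairs.
    work = sum(F * dX for F, dX in work_terms)
    heat = dE - work          # the unexplained residual is heat
    return heat / T           # dS = delta_Q / T

dE, T = 150.0, 300.0

# Agent A only tracks pressure-volume work (F = -P, so compression does positive work).
agent_A = [(-101325.0, -0.001)]        # -P * dV ≈ +101.3 J of work on the system
# Agent B can additionally manipulate a field-polarization pair (invented numbers).
agent_B = agent_A + [(50.0, 0.5)]      # an extra 25 J accounted for as work

print("dS for agent A:", entropy_change(dE, T, agent_A))  # (150 - 101.3) / 300 ≈ 0.162
print("dS for agent B:", entropy_change(dE, T, agent_B))  # (150 - 126.3) / 300 ≈ 0.079
```

The agent who can account for more of the transfer as work attributes less of it to heat, and so computes a smaller entropy change for the very same process.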
The work we can extract from a system depends on our knowledge of the system. This is a point that has been understood for a while. Read this post on the Szilard engine for a nice illustration of how our knowledge about a system can affect the amount of work we can get it to do. But of course if extractable work depends on knowledge, then the heat dissipated by the system must also depend on our knowledge, since heat is just the complement of work (it is that portion of the energy change that cannot be accounted for by work done). And if the heat dissipated is a function of our knowledge, so is the entropy. If our capacities were different—if we could have more or different knowledge about the system—our judgment of its entropy would differ.
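The standard Szilard-engine figure makes this quantitative: one bit of knowledge about which half of the box the molecule occupies is worth up to kT ln 2 of extractable work. A quick calculation (standard constants, arbitrary temperature):

```python
import math

# Szilard-engine arithmetic: knowing which half of the box the molecule is in
# (one bit) lets you extract up to k_B * T * ln(2) of work via isothermal
# expansion from V/2 to V. Without that bit, no work can be extracted.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K

work_per_bit = k_B * T * math.log(2)
print(f"work per known bit at 300 K: {work_per_bit:.2e} J")  # ≈ 2.87e-21 J
# Erasing the bit afterwards costs at least this much (Landauer's principle),
# which is why the engine does not beat the second law.
```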
The maximum work you can extract from a system does not depend on knowledge; greater knowledge may let you get work done more efficiently, and if you operate on a scale where raising an electron to a higher energy state counts as ‘useful work’ rather than ‘heat’, then you can minimize the heat term.
But you can’t have perfect knowledge of the system, because matter cannot be perfectly described. If the state of the box becomes more knowable than it was (per Heisenberg uncertainty), then the state of everything outside the box must become less knowable than it was. You could measure the knowability of a system by determining how many states are microscopically indistinguishable from the observed state: as the energies of the particles equalize (such that the number of possible Planck-unit positions is more equally divided between all the particles), there are more total states which are indistinguishable, since the total number of possible states is equal to the product of the number of possible states for each particle, and energy is conserved.
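A toy version of that counting claim (the rule "a particle with e energy units has e + 1 available positions" is an invented stand-in for the Planck-unit counting, not real physics) shows the product of per-particle state counts peaking at the even split:

```python
from math import prod

# Two particles share E = 10 discrete energy units; suppose a particle with
# energy e has (e + 1) available "positions". The total number of
# indistinguishable states is the product over particles.
E = 10

def total_states(e1):
    return prod(e + 1 for e in (e1, E - e1))

for e1 in range(E + 1):
    print(e1, E - e1, total_states(e1))
# The product (e1 + 1) * (E - e1 + 1) peaks at the even split e1 = 5 (36 states),
# so equalizing the energies maximizes the number of indistinguishable states.
```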
If you can show spontaneous interactions which result in two particles having a greater difference in total energy after they interact than they had before, feel free to win every Nobel prize ever.