But what if the nondeterminism actually has a bias towards entropy-decreasing events?
What if determinism has a bias towards entropy-decreasing events? There is no contradiction in the idea; it just means that the causal, entropic, and temporal arrows don't necessarily align.
Indeterminism kind of has to increase entropy because there are more high-entropy states than low-entropy ones, so a random walk starting at a low-entropy state will see increasing entropy.
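To make that counting argument concrete, here is a minimal sketch (a toy model of my own, not something from this thread): n two-state particles start in the all-zeros macrostate, and an unbiased random walk flips one randomly chosen particle per step. Because there are vastly more microstates near the half-and-half macrostate than near all-zeros, the Boltzmann entropy (log of the number of microstates compatible with the current particle count) almost always drifts upward from the low-entropy start.

```python
import math
import random

n = 100            # number of two-state particles
state = [0] * n    # low-entropy start: every particle in state 0

def boltzmann_entropy(state):
    """Log of the number of microstates sharing this macrostate (the count k of 1s)."""
    k = sum(state)
    return math.log(math.comb(n, k))

for step in range(2001):
    if step % 500 == 0:
        print(f"step {step:4d}  k = {sum(state):3d}  entropy = {boltzmann_entropy(state):6.2f}")
    state[random.randrange(n)] ^= 1   # unbiased flip of one randomly chosen particle
```

The walk is symmetric at the level of microstates; the drift towards higher entropy comes purely from the lopsided counting of macrostates, which is exactly why the low-entropy starting state is doing all the work.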
But that’s dependent on a low entropy starting state.
The low entropy starting state needs explaining, and the fact that the second law holds (given some side conditions) under classical physics needs to be explained. I suspect that classical chaos is what allows the deterministic and stochastic accounts to coincide.
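On the classical-chaos point, here is a hedged sketch (the baker's map is my choice of illustration, not one made in the thread) of how a deterministic, invertible, Lebesgue-measure-preserving map can reproduce the stochastic picture: a cloud of points started in a tiny corner of the unit square gets stretched and folded until its coarse-grained Shannon entropy sits near the maximum, even though the fine-grained dynamics are exactly reversible.

```python
import math
import random
from collections import Counter

def bakers_map(x, y):
    """One step of the baker's map: stretch in x, squeeze in y, cut and stack."""
    if x < 0.5:
        return 2 * x, y / 2
    return 2 * x - 1, (y + 1) / 2

def coarse_entropy(points, k=20):
    """Shannon entropy (nats) of the points binned on a k-by-k grid; the maximum is log(k*k)."""
    cells = Counter((min(int(x * k), k - 1), min(int(y * k), k - 1)) for x, y in points)
    total = len(points)
    return -sum((c / total) * math.log(c / total) for c in cells.values())

# Low-entropy start: 20,000 points crammed into a 0.05-by-0.05 corner of the unit square.
points = [(random.uniform(0, 0.05), random.uniform(0, 0.05)) for _ in range(20000)]
for t in range(16):
    if t % 5 == 0:
        print(f"t = {t:2d}  coarse-grained entropy = {coarse_entropy(points):4.2f}"
              f"  (max = {math.log(400):4.2f})")
    points = [bakers_map(x, y) for x, y in points]
```

Since the map is invertible, the same run read backwards is an entropy-decreasing trajectory of an equally deterministic map, which is the tension the rest of the thread circles around.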
That (determinism with a bias towards entropy-decreasing events) is the idea I try to address in my above comment. In our universe, if the same pattern appears twice, then presumably those two appearances have a common cause. But if our universe was just the time-reversed version of an entropy-decreasing universe, it doesn't seem to me like this would need to be the case.
Indeterminism only has to increase entropy if you are measure-preserving.
I don’t see why.
Which measure?
When you run dynamics ordinarily in a low-entropy environment, it’s perfectly possible for some pattern to generate two copies of itself, which then disintegrate: X → X X → ? ?
If you then flip the direction of time, you get ? ? → X X → X; that is, X appearing twice independently and then merging.
I don’t think there’s anything about the “bias towards lower entropy” law that would prevent the X → X X → ? ? pattern (though maybe I’m missing something), so if you reverse that, you’d get a lot of ? ? → X X → X patterns.
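To spell the reversal out in the most literal way (this is just the X → X X → ? ? story restated as code, nothing beyond what the comment already says): a recorded forward history in which one X copies itself and the copies then disintegrate reads, backwards, as two X's appearing with no common cause inside the reversed history and then merging.

```python
# Forward history: one X copies itself, then both copies disintegrate into debris.
forward_history = [["X"], ["X", "X"], ["?", "?"]]

# The same history read with the direction of time flipped.
reversed_history = list(reversed(forward_history))

print("forward :", " -> ".join(" ".join(frame) for frame in forward_history))
print("reversed:", " -> ".join(" ".join(frame) for frame in reversed_history))
# forward : X -> X X -> ? ?
# reversed: ? ? -> X X -> X   (two X's appear "independently", then merge)
```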
Whichever measure you use to quantify entropy.
If you are not measure-preserving under any measure, then you can simply destroy information to get rid of entropy. E.g. the function f(x) = 0 destroys all information, getting rid of all entropy.
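A small sketch of that last point (my own construction, assuming Shannon entropy over a finite state space): a bijection, which preserves the counting measure, leaves the entropy of a distribution untouched, while the constant map f(x) = 0 collapses every state onto one point and sends the entropy to zero.

```python
import math
import random
from collections import Counter

def shannon_entropy(dist):
    """Shannon entropy (nats) of a dict mapping states to probabilities."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def pushforward(dist, f):
    """Distribution of f(X) when X is distributed according to dist."""
    out = Counter()
    for x, p in dist.items():
        out[f(x)] += p
    return dict(out)

states = list(range(8))
dist = {x: 1 / 8 for x in states}                        # uniform: maximum entropy, log 8
bijection = dict(zip(states, random.sample(states, 8)))  # a measure-preserving permutation

print("original        :", round(shannon_entropy(dist), 3))                              # ~2.079
print("after bijection :", round(shannon_entropy(pushforward(dist, bijection.get)), 3))  # ~2.079
print("after f(x) = 0  :", round(shannon_entropy(pushforward(dist, lambda x: 0)), 3))    # 0.0
```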