Our universe appears to be a causal universe. By this, I mean that it appears to be characterized by having some particular ‘starting state’, and evolving forwards as a dynamical process according to some causal laws.
Much of what you say depends on strict determinism rather than causality. Causality is a superset of determinism.
This is definitely where the most likely loophole in my argument is, especially when combined with the quantum/chaos thing too. I’d be curious to know whether it is a serious loophole that might break the argument, or if it’s not so important.
Let’s consider my argument against causality running backwards; I said that this seemed unlikely since entropy is increasing. But what if the nondeterminism actually has a bias towards entropy-decreasing events? In that case, it seems like we could definitely expect a high-entropy universe to evolve into a low-entropy one.
To make this more concrete, imagine that we take a “heat death of the universe” state, and then add some imperceptible Gaussian noise to it, leading to a distribution of possible states. Now imagine that we evolve this forwards for a short while; this is going to lead to a wider distribution of states, with some that are lower entropy than the initial state. Suppose we then perform a Bayesian update on the entropy being low; this leaves us with a narrower distribution of states. And imagine that we then keep repeating this, constantly evolving the distribution forwards and updating on low entropy, to generate an entire trajectory for the universe.
Would this method be well-defined? I’m not sure; maybe one eventually ends up updating on an event that has probability 0. (Also, it would be challenging to define ‘entropy’ properly here...) But assuming it is well-defined, would the resulting trajectory look similar to a time-reversed version of our universe? My suspicion is that the answer is “no”, but I’m not sure. It seems like this could generate all sorts of weird information-based shenanigans; for instance, there’s nothing preventing a complex pattern from arising twice independently.
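As a rough sketch of how the procedure above could be made concrete, here is a toy particle-filter-style simulation: an ensemble of candidate states is evolved forwards with Gaussian noise, then reweighted and resampled towards low entropy, mimicking the “Bayesian update on the entropy being low”. The toy system (points diffusing on a circle), the histogram entropy proxy, and the exp(-BETA * entropy) likelihood are all illustrative assumptions, not anything specified in the comment.

```python
# Toy sketch (illustrative assumptions throughout): evolve an ensemble of candidate
# states forwards with Gaussian noise, then do a Bayesian-style update favouring low
# entropy, and repeat. This is essentially a bootstrap particle filter whose
# "likelihood" is exp(-BETA * entropy).
import numpy as np

rng = np.random.default_rng(0)

N_PARTICLES = 500   # candidate universe-states in the distribution
N_MOLECULES = 200   # each state: positions of 200 "molecules" on a circle [0, 1)
N_STEPS = 50
NOISE = 0.02        # the "imperceptible Gaussian noise" added each step
BETA = 200.0        # how sharply we condition on the entropy being low

def entropy_proxy(state, bins=20):
    """Coarse-grained entropy: Shannon entropy of a histogram of positions."""
    counts, _ = np.histogram(state, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# "Heat death" start: molecules spread uniformly, i.e. a high-entropy state,
# replicated into an ensemble that represents our distribution over states.
ensemble = np.tile(rng.uniform(0.0, 1.0, N_MOLECULES), (N_PARTICLES, 1))

history = []
for step in range(N_STEPS):
    # 1. Evolve the distribution forwards a little (here: pure diffusion + noise).
    ensemble = (ensemble + rng.normal(0.0, NOISE, ensemble.shape)) % 1.0

    # 2. Update on the entropy being low: weight each candidate state by
    #    exp(-BETA * entropy) and resample. Larger BETA = sharper conditioning.
    ent = np.array([entropy_proxy(s) for s in ensemble])
    weights = np.exp(-BETA * (ent - ent.min()))
    weights /= weights.sum()
    ensemble = ensemble[rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)]

    history.append(entropy_proxy(ensemble[0]))

# If the conditioning outweighs the diffusion, the selected trajectory drifts
# towards clumpier, lower-entropy configurations over time.
print(round(history[0], 3), "->", round(history[-1], 3))
```

Whether a loop like this would converge on anything resembling a time-reversed universe is exactly the open question in the comment; the sketch only shows that the evolve-then-condition procedure itself is easy to write down.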
What if determinism has a bias towards entropy-decreasing events? There is no contradiction in the idea, it just means that the causal, entropic, and temporal arrows don’t necessarily align.
Indeterminism kind of has to increase entropy, because there are more high-entropy states than low-entropy ones, so a random walk starting at a low-entropy state will see increasing entropy.
But that’s dependent on a low-entropy starting state.
The low-entropy starting state needs explaining, and the fact that the second law holds (given some side conditions) under classical physics needs to be explained. I suspect that classical chaos is what allows the deterministic and stochastic accounts to coincide.
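A quick numerical illustration of the random-walk claim above, under assumptions not taken from the thread (a bit-flip walk on N two-state particles, with entropy measured as the log of the number of microstates in the current macrostate):

```python
# Illustrative sketch (not from the thread): a symmetric random walk on microstates,
# started in a low-entropy state, wanders towards higher entropy simply because
# almost all microstates belong to large (high-entropy) macrostates.
import math
import random

random.seed(0)

N = 100               # number of two-state "particles"
state = [0] * N       # low-entropy start: every particle in state 0

def boltzmann_entropy(state):
    """log of the number of microstates sharing this macrostate (k = number of 1s)."""
    k = sum(state)
    return math.log(math.comb(N, k))

for step in range(2001):
    if step % 500 == 0:
        print(f"step {step:4d}  entropy {boltzmann_entropy(state):6.2f}")
    state[random.randrange(N)] ^= 1   # flip one randomly chosen particle

# Entropy climbs from 0 towards its maximum log C(100, 50) (about 66.8), but only
# because the walk started in a low-entropy state, which is the caveat raised above.
```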
What if determinism has a bias towards entropy-decreasing events? There is no contradiction in the idea, it just means that the causal, entropic, and temporal arrows don’t necessarily align.
That’s the idea I try to address in my above comment. In our universe, if the same pattern appears twice, then presumably those two appearances have a common cause. But if our universe were just the time-reversed version of an entropy-decreasing universe, it doesn’t seem to me like this would need to be the case.
When you run dynamics ordinarily in a low-entropy environment, it’s perfectly possible for some pattern to generate two copies of itself, which then disintegrate: X → X X → ? ?
If you then flip the direction of time, you get ? ? → X X → X; that is, X appearing twice independently and then merging.
I don’t think there’s anything about the “bias towards lower entropy” law that would prevent the X → X X → ? ? pattern (though maybe I’m missing something), so if you reverse that, you’d get a lot of ? ? → X X → X patterns.
Only if you are measure-preserving.
I don’t see why.
Which measure?
Whichever measure you use to quantify entropy.
If you are not measure-preserving under any measure, then you can simply destroy information to get rid of entropy. E.g. the function f(x) = 0 destroys all information, getting rid of all entropy.
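A small sketch of that last point, using an assumed sampling-based setup rather than anything from the thread: the constant map f(x) = 0 collapses Shannon entropy to zero, while a bijection, which preserves the counting measure, leaves it unchanged.

```python
# Illustrative check (assumptions mine): applying a non-measure-preserving map such
# as f(x) = 0 destroys information and hence entropy, while a measure-preserving
# bijection leaves the entropy of the distribution unchanged.
import math
import random
from collections import Counter

random.seed(0)

def shannon_entropy(samples):
    """Empirical Shannon entropy (in bits) of a list of samples."""
    counts = Counter(samples)
    total = len(samples)
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

xs = [random.randrange(16) for _ in range(100_000)]   # roughly uniform over 16 states

f_constant = lambda x: 0              # destroys all information
f_bijection = lambda x: (x + 7) % 16  # a permutation of the 16 states

print("original:  ", round(shannon_entropy(xs), 3))                           # ~4 bits
print("f(x) = 0:  ", round(shannon_entropy([f_constant(x) for x in xs]), 3))  # 0 bits
print("bijection: ", round(shannon_entropy([f_bijection(x) for x in xs]), 3)) # ~4 bits
```

The “measure” here is just the counting measure on 16 states; the same point goes through for whichever measure one uses to define the entropy.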