Thanks for the response! Part of my confusion went away, but some still remains.
In the game of life example, couldn’t there be another factorization where a later step is “before” an earlier one? (Because the game is non-reversible and later steps contain less and less information.) And if we replace it with a reversible game, don’t we run into the problem that the final state is just as good a factorization as the initial?
Yep, there is an obnoxious number of factorizations of a large game of life computation, and they all give different definitions of “before.”
I think your argument about entropy might have the same problem. Since classical physics is reversible, if we build something like a heat engine in your model, all randomness will be already contained in the initial state. Total “entropy” will stay constant, instead of growing as it’s supposed to, and the final state will be just as good a factorization as the initial. Usually in physics you get time (and I suspect also causality) by pointing to a low probability macrostate and saying “this is the start”, but your model doesn’t talk about macrostates yet, so I’m not sure how much it can capture time or causality.
That said, I really like how your model talks only about information, without postulating any magical arrows. Maybe it has a natural way to recover macrostates, and from them, time?
Wait, I misunderstood. I was just thinking about the game of life combinatorially, and I think you were thinking about temporal inference from statistics. The reversible cellular automaton story is a lot nicer than you’d think.
If you take a general reversible cellular automaton (critters, for concreteness) and put a distribution over computations in general position in which the cells of the initial condition are independent, the cells may not be independent at future time steps.
If all of the initial probabilities are 1⁄2, you will stay in the uniform distribution, but if the probabilities are in general position, things will change, and time 0 will be special because of the independence between cells.
There will be other events at later times that will be independent, but those later time events will just represent “what was the state at time 0.”
For a concrete example, consider the reversible cellular automaton that just has 2 cells, X and Y, and at each time step keeps X constant and replaces Y with X xor Y.
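Here is a minimal numerical check of that 2-cell example (my own illustration, not part of the original comment; the biases p = 0.3 and q = 0.8 are arbitrary choices). It pushes an independent-but-biased initial distribution through the rule and prints the mutual information between the cells: zero at time 0, positive at time 1, and zero again at time 2, where the state is just the time-0 state again.

```python
# 2-cell example: X stays fixed, Y is replaced by X xor Y.
# With independent biased initial cells (probabilities in "general position"),
# the cells are dependent at time 1 and return to the time-0 state at time 2.
# p, q below are illustrative choices, not from the original discussion.
from itertools import product
from math import log2

p, q = 0.3, 0.8          # P(X=1), P(Y=1) at time 0, chosen arbitrarily

def step(joint):
    """Push a joint distribution over (X, Y) through one automaton step."""
    new = {}
    for (x, y), prob in joint.items():
        new[(x, x ^ y)] = new.get((x, x ^ y), 0.0) + prob
    return new

def mutual_information(joint):
    """I(X;Y) in bits for a joint distribution given as {(x, y): prob}."""
    px = {x: sum(pr for (a, _), pr in joint.items() if a == x) for x in (0, 1)}
    py = {y: sum(pr for (_, b), pr in joint.items() if b == y) for y in (0, 1)}
    return sum(pr * log2(pr / (px[x] * py[y]))
               for (x, y), pr in joint.items() if pr > 0)

joint = {(x, y): (p if x else 1 - p) * (q if y else 1 - q)
         for x, y in product((0, 1), repeat=2)}

for t in range(3):
    print(f"t={t}  I(X;Y) = {mutual_information(joint):.4f} bits")
    joint = step(joint)
```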
Wait, can you describe the temporal inference in more detail? Maybe that’s where I’m confused. I’m imagining something like this:
1. Check which variables look uncorrelated
2. Assume they are orthogonal
3. From that orthogonality database, prove “before” relationships
Which runs into the problem that if you let a thermodynamical system run for a long time, it becomes a “soup” where nothing is obviously correlated to anything else. Basically the final state would say “hey, I contain a whole lot of orthogonal variables!” and that would stop you from proving any reasonable “before” relationships. What am I missing?
I think that you are pointing out that you might get a bunch of false positives in your step 1 after you let a thermodynamical system run for a long time, but they are only approximate false positives.
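To make “approximate false positive” concrete, here is a toy sketch (my own analogue, not the framework’s actual inference procedure; the bias 0.45 is an arbitrary choice). The parity of many independent, slightly biased bits is a deterministic function of all of them, yet its mutual information with any single input bit shrinks toward zero, so a step-1 independence test with any fixed threshold will eventually call the pair “orthogonal.”

```python
# Toy analogue of the "soup" worry: the parity of many independent, slightly
# biased bits looks almost uncorrelated with any single input bit, even though
# it is a deterministic function of them all.  Any fixed independence threshold
# in step 1 will eventually pass such pairs -- approximate false positives.
# The bias 0.45 is an arbitrary illustrative choice.
from math import log2

bias = 0.45  # P(bit = 1) for each independent input bit

def parity_prob(n, p):
    """P(parity of n independent Bernoulli(p) bits is odd): (1 - (1 - 2p)^n) / 2."""
    return (1 - (1 - 2 * p) ** n) / 2

for n in (2, 5, 10, 20):
    s = parity_prob(n, bias)            # marginal of S = X1 xor ... xor Xn
    rest = parity_prob(n - 1, bias)     # parity of the other n-1 bits
    # Joint distribution of (X1, S): S is odd iff X1 disagrees with the rest.
    joint = {(1, 1): bias * (1 - rest), (1, 0): bias * rest,
             (0, 1): (1 - bias) * rest, (0, 0): (1 - bias) * (1 - rest)}
    mi = sum(pr * log2(pr / ((bias if x else 1 - bias) * (s if y else 1 - s)))
             for (x, y), pr in joint.items() if pr > 0)
    print(f"n={n:2d}  I(X1; parity) = {mi:.6f} bits")
```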
I think my model has macrostates. In the game of life, if you take the entire grid at time t, that will have full history regardless of t. It is only when you look at the macrostates (individual cells) that my time increases with game-of-life time.
As for entropy, here is a cute observation (with unclear connection to my framework): whenever you take two independent coin flips (with probabilities not 0, 1, or 1/2), their xor will always have higher entropy than either of the individual coin flips.
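A quick numerical check of this observation (my own, with arbitrarily chosen biases): for independent flips with P(heads) = p and q, the xor comes up heads with probability p + q - 2pq, which is strictly closer to 1/2 than either p or q whenever both lie in (0, 1) and neither is 1/2, so its binary entropy exceeds both h(p) and h(q).

```python
# Check: the xor of two independent biased coins has higher binary entropy
# than either coin, for probabilities other than 0, 1, or 1/2.
# The sample (p, q) pairs below are arbitrary illustrative choices.
from math import log2

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

for p, q in [(0.1, 0.2), (0.3, 0.9), (0.49, 0.01), (0.7, 0.7)]:
    r = p + q - 2 * p * q                 # P(xor = 1)
    assert h(r) > max(h(p), h(q)), (p, q)
    print(f"p={p}, q={q}:  h(p)={h(p):.3f}  h(q)={h(q):.3f}  h(xor)={h(r):.3f}")
```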