Interesting question! I’d say that you could refer to the possibilities as possibilities, e.g. in a debate over whether a particular past would in fact have led to the present, but to speak of the ‘actual past’ might make no sense because you couldn’t get there from there… no, actually, I take that back, you might be able to get there via simplicity. I.e. if there’s only one past that would have evolved from a simply-tiled start state for the automaton.
But does it really matter? If both states are possible, why not just say “my past contains ambiguity?”
With quantum mechanics, even though the “future” itself (as a unified wavefunction) evolves forward as a whole, the bit-that-makes-up-this-pseudofactor-of-me has multiple possible outcomes. We live with future ambiguity just fine, and quantum mechanics forces us to say “both experienced futures must be dealt with probabilistically”. Even though the mechanism is different, what’s wrong with treating the “past” as containing the same level of branching as the future?
EDIT: From a purely global, causal perspective, I understand the desire to be able to say, “both X and Y can directly cause Z, but in point of fact, this time it was Y.” But you’re inside, so you don’t get to operate as a thing that can distinguish between X and Y, and this isn’t necessarily an “orbital teapot” level of implausibility. If configuration Y is 10^4 times more likely as a ‘starting’ configuration than configuration X according to your understanding of how starting configurations are chosen, then sure—go ahead and assert that it was (or may-as-well-have-been) configuration Y that was your “actual” past—but if the configuration probabilities are more like 70%/30%, or if your confidence that you understand how starting configurations are chosen is low enough, then it may be better to just swallow the ambiguity.
EDIT2: Coming from a completely different angle, why assert that one or the other “happened”, rather than looking at it as a kind of path-integral? It’s a cellular automaton, instead of a quantum wave-function, which means that you’re summing discrete paths instead of integrating infinitesimals, but it seems (at first glance) that the reasoning is equally applicable.
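The “many pasts per present” situation is easy to exhibit by brute force. The sketch below (my own toy illustration, not anything from the thread) runs Life on a closed 3×3 grid—cells outside the box are treated as permanently dead, so this is only a finite stand-in for the infinite plane—enumerates all 2^9 states, and groups them by successor. The resulting map is far from injective: many states share a successor, so summing over discrete predecessor paths is doing real work.

```python
from itertools import product

def step_3x3(live):
    """One Life step on a closed 3x3 grid (cells outside are permanently dead)."""
    nxt = set()
    for x, y in product(range(3), range(3)):
        n = sum((x + dx, y + dy) in live
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0))
        if n == 3 or (n == 2 and (x, y) in live):
            nxt.add((x, y))
    return frozenset(nxt)

# Enumerate all 2^9 states and group them by their successor.
cells = list(product(range(3), range(3)))
predecessors = {}
for bits in product((0, 1), repeat=9):
    state = frozenset(c for c, b in zip(cells, bits) if b)
    predecessors.setdefault(step_3x3(state), []).append(state)

# "The" past of a state is not unique: the empty grid alone has the empty
# grid itself, every single-cell state, and every two-cell state (at least
# 46 states in total) as valid pasts.
print(len(predecessors[frozenset()]))
```

A path-sum over pasts would then weight each entry of `predecessors[NOW]` by its prior, rather than picking one as “actual”.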
If both states are possible, why not just say “my past contains ambiguity?”
Ambiguity it is, but we usually want to know the probabilities. If I told you that whether or not you win a lottery tomorrow is “ambiguous”, you would not be satisfied with such an answer; you would ask how likely you are to win. And this question still makes sense even if the lottery is decided by a quantum event, so that you know each future happens in some Everett branch.
Similarly, in addition to knowing that the past is ambiguous, we should ask how likely the individual pasts are. In our universe you would want to know how likely the pasts P1 and P2 are to become NOW. Conway’s Game of Life does not branch time-forward, so if you have two valid pasts, each of them becomes NOW with probability 100%.
But that is only part of the equation. The other part is the prior probabilities of P1 and P2. Even if both P1 and P2 deterministically evolve to NOW, their prior probabilities influence how likely it is that NOW really evolved from each of them.
I am not sure what the equivalent of Solomonoff induction for Conway’s Game of Life would be. Starting with a finite number of “on” cells, where each additional “on” cell decreases the prior probability of the configuration? Starting with an infinite plane where each cell has a 50% probability of being “on”? Or an infinite plane with each cell having a probability p of being “on”, where p has the property that after one step on such a plane, the average ratio of “on” cells remains the same (p being a kind of eigenvalue of the rules)?
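That third option can at least be estimated in a mean-field approximation (my own back-of-the-envelope sketch, which ignores the correlations that real Life evolution creates): if every cell is independently alive with probability p, the probability of being alive after one step follows from the binomial distribution over the 8 neighbours, and the self-consistent density is a fixed point of that map.

```python
from math import comb

def next_density(p):
    """Mean-field estimate: probability a cell is alive after one step,
    assuming every cell is independently alive with probability p
    (this independence assumption is the approximation)."""
    b = lambda k: comb(8, k) * p**k * (1 - p)**(8 - k)
    # alive and 2 or 3 live neighbours, or dead and exactly 3 live neighbours
    return p * (b(2) + b(3)) + (1 - p) * b(3)

# Bisect for a nontrivial fixed point p* with next_density(p*) == p*;
# next_density(p) - p changes sign between 0.3 and 0.4.
lo, hi = 0.3, 0.4
for _ in range(60):
    mid = (lo + hi) / 2
    if next_density(mid) > mid:
        lo = mid
    else:
        hi = mid
print(round(lo, 4))  # the mean-field "eigen-density" of the rules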
But the general idea is that if P1 is somehow “generally more likely to happen” than P2, we should consider P1 the more likely past of NOW, even if both P1 and P2 deterministically evolve to NOW.
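Spelled out as a Bayesian update, this is the degenerate case where the likelihood term does no work: since each valid past reaches NOW with certainty, the posterior over pasts is just the renormalized prior. A minimal sketch, borrowing the 70%/30% figures from the earlier comment purely as illustrative priors:

```python
# Posterior over pasts = prior * likelihood, normalized.
# Deterministic forward evolution means likelihood = 1 for every valid past,
# so only the priors matter.
priors = {"P1": 0.7, "P2": 0.3}
likelihood = {"P1": 1.0, "P2": 1.0}
unnorm = {k: priors[k] * likelihood[k] for k in priors}
total = sum(unnorm.values())
posterior = {k: v / total for k, v in unnorm.items()}
print(posterior)
```

With any non-deterministic rule the likelihoods would differ from 1 and the update would shift the weights; in Life they cannot.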
In the Game of Life, a single live cell with no neighbours will become a dead cell in the next step. Therefore, any possible present state that has at least one past state has an infinite number of one-step-back states (which differ from that one state merely in having one or more neighbourless live cells at random locations, far enough from anything else to have no effect).
Some of these one-step-back states may end up having evolved from simpler starting tilesets than the one with no vanishing cells.
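The vanishing-cell construction can be checked directly. The sketch below (my own illustration) implements one Life step on the infinite plane as a set of live coordinates, and shows that a horizontal blinker, and the same blinker plus a distant lone cell, both step to the identical vertical blinker:

```python
from collections import Counter

def life_step(live):
    """One Game of Life step on the infinite plane; live is a set of (x, y)."""
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next step if it has exactly 3 live neighbours,
    # or 2 live neighbours and is currently live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

vertical_blinker = {(1, 0), (1, 1), (1, 2)}
past_a = {(0, 1), (1, 1), (2, 1)}      # horizontal blinker
past_b = past_a | {(100, 100)}         # same, plus a distant neighbourless cell

print(life_step(past_a) == vertical_blinker)  # True
print(life_step(past_b) == vertical_blinker)  # True: the lone cell vanished
```

Repeating the trick with two, three, or more far-apart lone cells gives the promised infinite family of distinct one-step-back states.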
no, actually, I take that back, you might be able to get there via simplicity. I.e. if there’s only one past that would have evolved from a simply-tiled start state for the automaton.
The simplest start state might actually be a program that simulates the evolution of every possible starting state in parallel. If time and space are unbounded, and an entity is more complex than the shortest such program, then it is more likely that the entity is the result of the program than the result of evolving from another random state.