While many computations admit shortcuts that allow them to be performed more rapidly, others cannot be sped up.
In your Game of Life example, one could store larger-than-3x3 grids and build the complete mapping from states to next states, reusing those mappings to make later computations cheaper. The full state → next-state table permits compression, bottoming out in a minimal generating set for next states. One can also run the rules in reverse and generate all of the possible initial states that lead to a given state, without having to compute bottom-up for every state.
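As a minimal sketch of the compression I have in mind (my own illustration, assuming the standard Life rules over the 512 possible 3x3 neighborhoods):

```python
from itertools import product

def next_center(block):
    """Next state of the center cell of a 3x3 block under standard Life rules."""
    center = block[4]                      # blocks are 9-tuples in row-major order
    neighbors = sum(block) - center
    return 1 if neighbors == 3 or (center == 1 and neighbors == 2) else 0

# Forward table: every possible 3x3 neighborhood -> next state of its center.
forward = {block: next_center(block) for block in product((0, 1), repeat=9)}

# Reverse table: next center state -> all neighborhoods producing it, i.e. the
# local predecessors, enumerated without re-running the rule bottom-up.
reverse = {0: [], 1: []}
for block, nxt in forward.items():
    reverse[nxt].append(block)

print(len(forward), len(reverse[1]))   # 512 neighborhoods; 140 yield a live center
```

The same construction scales to larger blocks, which is where the real savings from reuse would come in.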
The laws of physics could preclude our perfectly pinpointing which universe is ours via fine measurement, but I don’t see anything precluding enough observations of large states, combined with knowledge of the dynamics, to yield a proof that some particular particle configuration gave rise to our universe (e.g. the other starting states lead to planets where everything is on a cob, and we can see that no such world exists here). For things that depend on low-level phenomena, the question is whether it is possible to ‘flatten’ the computational problem by piecing together smaller solved systems cheaply enough to predict large states with sufficient accuracy.
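Continuing the sketch above, ‘flattening’ in the Life case would amount to advancing an arbitrarily large grid purely by stitching together the small precomputed solutions (boundary cells are assumed dead here; `forward` is the table built above):

```python
def step_grid(grid, forward):
    """Advance a whole square grid one step using only precomputed 3x3 lookups."""
    n = len(grid)
    def cell(r, c):
        return grid[r][c] if 0 <= r < n and 0 <= c < n else 0  # dead boundary
    return [[forward[tuple(cell(r + dr, c + dc)
                           for dr in (-1, 0, 1) for dc in (-1, 0, 1))]
             for c in range(n)]
            for r in range(n)]

blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(step_grid(blinker, forward))  # oscillates to the vertical blinker
```

Storing larger blocks together with their multi-step futures (as HashLife does) extends the same reuse to super-linear speedups, which is the sense in which the example permits compression.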
I see no rule that says we can’t determine future states of our universe using this method, far in advance of the universe getting there. One may be able to know when a star will go supernova without the prediction failing because only part of an entangled particle configuration was represented, and high-level observations could be sufficient to distinguish our world from others.
The anthropically concerning question is whether it’s possible, from any place that exists, to simulate a minimally satisfying copy of our experience (rather than a full particle configuration for an entire universe) such that all our observations are indistinguishable from the original. It is not whether there is a way to do so faster than playing out the dynamics: if it took 10 lifetimes of our universe, but was feasible in an initially ‘causally separate’ world (there may be no such thing if everything could be said to follow from an initial cause, but the sense in which I mean this still applies), nothing would depend on the actual rate at which our universe’s dynamics play out; observers in our reference class could experience simulator-induced shifts in expectation independent of when the simulation was done.
We’re in a reference class with all of our simulacra regardless of when we’re simulated, because we haven’t gained information that distinguishes which one we are. Whether it happens before or after we exist, simulating us adds to the same state from our perspective, where that state is a time-independent sum over all of the times we’ve ever occurred. If you are simulated only after our universe ends, you get no information about this unless the simulator induces distinguishing information, and it is the same as if they had done so before our universe arose.
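A hedged way to formalize that last point (my notation, not anything established): if $n(t)$ counts the instantiations of our experience occurring at time $t$, and all of them are observationally indistinguishable, a symmetric self-locating credence depends only on the total,

$$P(\text{I am instance } i \mid \text{my observations}) = \frac{1}{N}, \qquad N = \sum_{t} n(t),$$

so when any given simulation is run never enters the calculation; only whether it adds to $N$ does.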
Thanks for your comment. §1: Okay, you mean something like this, right? I think you’re right; maybe the Game of Life wasn’t the best example then.
§2: I think I agree, but I can’t really see why one would need to know which configuration gave rise to our universe.
§3: I’m not sure if I’m answering adequately, but I meant many chaotic phenomena, which probably include stars transforming into supernovae. In that case we arguably can’t precisely predict the time of the transformation without fully computing the “low-level phenomena”. But I still can’t see why we would need to “distinguish our world from others”.
For now I’m not sure I see where you’re going after that, sorry! Maybe I’ll think about it again and get it later.
I can’t really see why one would need to know which configuration gave rise to our universe.
This was about the feasibility of locating our specific universe in order to simulate it at full fidelity. It’s unclear whether that is feasible, but if it were, it could entail a way to get at an entire future state of our universe.
I can’t see why we would need to “distinguish our world from others”.
This was only a point about useful macroscopic predictions any significant distance into the future; prediction relies on information that distinguishes which world we’re in.
For now I’m not sure I see where you’re going after that, sorry! Maybe I’ll think about it again and get it later.
I wouldn’t worry about that; I was mostly adding some relevant details rather than necessarily arguing against your points. The point about the Game of Life was that it permits compression, which makes it harder for me to tell whether it demonstrates the same sort of irreducibility that quantum states might importantly have (or whatever the lowest level is that still has important degrees of freedom with respect to prediction). The only accounts of this I’ve encountered suggest there is some important irreducibility in QM, but I’m not yet convinced there isn’t a suitable form of compression at some level for the purposes of AC.
Both macroscopic prediction and AC seem to depend on the feasibility of ‘flattening up’ from quantum states cheaply enough that a pre-computed structure can support accurate macroscopic prediction or AC; if that is feasible, it stands to reason that it would make capture cheap.
There is also an argument I didn’t go into, which suggests that observers might typically find themselves in places that are hard or infeasible to capture, for intentional reasons: a certain sort of simulator might be said to fully own anything it doesn’t have to share control of, which suggests those states are higher value. This is a point in favor of irreducibility as a potential sim-blocker for simulators after the first, if it’s targetable in the first place. For example, it might be possible to condition the small states a simulator is working with on large-state phenomena, as a cryptographic sim-blocker. This then feeds into considerations about acausal trade among agents that do or do not use cryptographic sim-blockers, depending on feasibility.
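As a purely hypothetical sketch of what ‘conditioning small states on large-state phenomena’ could look like (every name and mechanism below is mine, standing in for the idea rather than any worked-out proposal): derive the parameters of the low-level dynamics from a digest of large-scale state, so that reproducing the microdynamics requires already possessing the macro-observation.

```python
import hashlib

def keyed_microdynamics(large_scale_observation: bytes, x0: int, steps: int) -> int:
    """Hypothetical toy model: a low-level update rule parameterized by a hash
    of large-scale state. Without the large-scale observation, a would-be
    simulator cannot reproduce the trajectory short of inverting the digest,
    which is the 'cryptographic sim-blocker' in miniature."""
    key = int.from_bytes(hashlib.sha256(large_scale_observation).digest()[:8], "big")
    x = x0
    for _ in range(steps):
        # Toy keyed mixing step; stands in for dynamics conditioned on the key.
        x = (x * 6364136223846793005 + key) & 0xFFFFFFFFFFFFFFFF
    return x

print(keyed_microdynamics(b"positions of every galaxy in the visible sky", 42, 1000))
```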
I don’t know of anything working against the conclusion you’re entertaining; the overall argument is good. I expect an argument from QM and computational complexity could inform my uncertainty about whether the compression permitted in QM entails the feasibility of computing states faster than physics plays them out.