I can’t really see why one would need to know which configuration gave rise to our universe.
This was about the feasibility of locating our specific universe in order to simulate it at full fidelity. It’s unclear whether that’s feasible, but if it were, it could entail a way to get at an entire future state of our universe.
I can’t see why we would need to “distinguish our world from others.”
This was only a point about useful macroscopic predictions any significant distance into the future; prediction relies on information that distinguishes which world we’re in.
For now I’m not sure I see where you’re going after that, sorry! Maybe I’ll think about it again and get it later.
I wouldn’t worry about that; I was mostly adding some relevant details rather than necessarily arguing against your points. The point about the Game of Life was that it permits compression, which makes it harder for me to determine whether it demonstrates the same sort of reducibility that quantum states might importantly have (or whatever the lowest level is that still has degrees of freedom important for prediction). The only accounts of this I’ve encountered suggest there is some important irreducibility in QM, but I’m not yet convinced there isn’t a suitable form of compression at some level for the purpose of AC.
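To make the Game of Life point concrete, here’s a toy sketch of the kind of compression it permits; this is the idea Hashlife exploits far more aggressively by memoizing quadtree nodes, whereas this sketch just memoizes whole small grids. The grid size and the blinker pattern are illustrative choices, not anything from the discussion above.

```python
from collections import Counter
from functools import lru_cache

W = H = 8  # a small toroidal grid, chosen only for illustration

@lru_cache(maxsize=None)
def step(cells: frozenset) -> frozenset:
    # One Game of Life step; memoized, so a previously seen state
    # costs a dict lookup instead of a recomputation.
    counts = Counter(
        ((x + dx) % W, (y + dy) % H)
        for x, y in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return frozenset(
        cell for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in cells)
    )

state = frozenset({(3, 2), (3, 3), (3, 4)})  # a blinker, period 2
for _ in range(1000):
    state = step(state)
print(step.cache_info())  # 998 hits, 2 misses: most of the run was never recomputed
```

The pattern’s regularity is what lets almost all of the run be skipped; the open question above is whether anything analogous is available at the quantum level.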
Both macroscopic prediction and AC seem to depend on the feasibility of ‘flattening up’ from quantum states cheaply enough that a pre-computed structure can support accurate macroscopic prediction or AC; if that is feasible, it stands to reason that it would also make capture cheap.
There is also an argument I didn’t go into which suggests that observers might typically find themselves in places that are hard or infeasible to capture, for intentional reasons: a certain sort of simulator might be said to fully own anything it doesn’t have to share control of, which suggests those states are higher value. This is a point in favor of irreducibility as a potential sim-blocker against simulators after the first, if it’s targetable in the first place. For example, it might be possible to condition the small states a simulator is working with on large-state phenomena, as a cryptographic sim-blocker. This then feeds into considerations about acausal trade among agents which do or do not use cryptographic sim-blockers, depending on feasibility.
I don’t know of anything working against the conclusion you’re entertaining; the overall argument is good. I expect an argument from QM and computational complexity could inform my uncertainty about whether the compression permitted in QM entails the feasibility of computing states faster than physics does.
Yudkowsky + Wolfram Debate
Some language to simplify places where the debate got stuck.
Is-Ought
Analyzing how to preserve or act on preferences is a coherent thing to do, and it’s possible to do so without assuming a one true universal morality. Assume a preference ordering, and now you’re in the land of is, not ought, where there can be a correct answer (highest expected value).
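A minimal sketch of that last move, with entirely hypothetical outcomes, probabilities, and utilities: once a preference ordering is fixed as a utility function, “which action is best” becomes a factual, computable question.

```python
# Hypothetical utilities encoding an assumed preference ordering.
utility = {"sunny_picnic": 10, "rainy_picnic": -5, "quiet_day_in": 2}

# Hypothetical actions mapped to (outcome, probability) pairs.
actions = {
    "go_on_picnic": [("sunny_picnic", 0.7), ("rainy_picnic", 0.3)],
    "stay_home":    [("quiet_day_in", 1.0)],
}

def expected_value(outcomes):
    return sum(p * utility[o] for o, p in outcomes)

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)  # "go_on_picnic": EV 5.5 beats 2.0, an "is" fact given the preferences
```

Nothing here appeals to a universal morality; the correctness of the answer is relative to the assumed ordering, and that relativity is the whole point.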
Is There One Reality?
Let existence be defined to mean everything: all the math, all the indexical facts. “Ah, but you left out…” Nope, throw that in too. Everything. Existence is a pretty handy word for that; let’s reserve it for that purpose. As for any points about how our observations are compatible with multiple implementations: we’ve already lumped those into our description of a “unique reality”.
Noise, In MY Conformal Geometry?!
Noise is noise with respect to a prediction, and so it is coherent to discuss. One can abstract away from certain details for the purpose of making a specific prediction; call the stuff that can be abstracted away “noise” relative to that prediction.
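A toy illustration, using dice as a stand-in since the original point is abstract: the very same details are noise relative to one prediction and the entire signal relative to another.

```python
import random

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(10_000)]

# Prediction 1: the mean of many rolls. Individual outcomes wash out;
# relative to this prediction they are noise we can abstract away.
print(sum(rolls) / len(rolls))  # ~3.5 regardless of the particular rolls

# Prediction 2: the value of roll #17. Nothing can be abstracted away here;
# the same details are now the whole signal.
print(rolls[17])
```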
Decoupled Outer And Inner Optimization Targets
Inclusive genetic fitness led to weirdos who like ice cream, but predictive loss may be a purer target than IGF. If we don’t press down on it insanely hard, it’s quite plausible that we get all the way to significantly superhuman generality without any unfortunate parallels to that issue. If you work at a frontier AI lab, probably don’t build agents in stupid ways or enable their being built too quickly; that seems like the greatest liability at present.