When actually run, it makes two pieces of the territory change so that they contain a pattern that we would recognize as “5”.
Right, but it doesn’t attach little “5” tags to that pattern.
Can you give an example of information with that property?
Trivially, no: by giving the example I would create a representation. But, does a theorem become true when it is proven? That seems to me to be absurd. Counterfactually, suppose there were no minds. Would that prevent it from being true that “PA proves 2+2=4”? That also seems absurd. I can’t prove it’s absurd, but that’s because a rock doesn’t implement modus ponens (no universally compelling arguments).
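For concreteness: “PA proves 2+2=4” is a mechanically checkable fact about a formal system. A minimal sketch in Lean (using Lean’s kernel as a stand-in for a PA-style derivation; the point is that the derivation exists whether or not anyone runs the checker):

```lean
-- `rfl` asks the kernel to reduce both sides by computation
-- and compare them; the proof term exists as a consequence of
-- the axioms, independently of any mind inspecting it.
example : 2 + 2 = 4 := rfl
```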
If X will happen tomorrow, then it is a fact that X will happen tomorrow, even though (ignoring for now timeless physics) tomorrow “doesn’t exist yet”, and the information “X will happen tomorrow” needn’t be represented anywhere to be true; it inheres in the state of the universe today + {equations of physics}. Information which can be arrived at by computation from other, existing information, exists—or perhaps we should move the ‘other, existing information’ across the turnstile: it is an existing truth that (Information which can be arrived at by computation from foo) can be arrived at by computation from foo. Tautologies are true.
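The “moving across the turnstile” move above is essentially the deduction theorem of classical logic; a sketch in standard proof-theoretic notation, writing $\mathrm{foo}$ for the existing information and $\psi$ for what can be computed from it:

```latex
% If psi is derivable from foo, then the conditional
% "foo implies psi" is derivable from no premises at all,
% i.e. it is a standing (tautologous) truth:
\mathrm{foo} \vdash \psi
\quad\Longrightarrow\quad
\vdash \mathrm{foo} \rightarrow \psi
```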
But the actual-territory is not (or at least, need not be) causally influenced by the territory inside your head that’s implementing the map.
I can’t speak for Tegmark, of course, but what I’m saying is that “equations” are the territory, and the stuff that looks to us like rocks and trees and people and the Moon is just a map.
On the contrary, the conclusion is that things must exist even when they don’t “exist”—where that quotation refers to some silly little savanna-concept we have in our brains, about rocks and trees and people and the Moon. Which don’t exist.
That’s because (in my model) the conceptual entities are the bedrock of the hierarchy, and physical existence is strongly analogous in this model to qualia in a physical-realist model. After all, “{equations}” and “rocks following {equations}” both give the same result for “value of X at time T”, so the existence of rocks is epiphenomenal to the equations.
But a simulated me, existing only as information represented by electrons in a computer, could say “equations” just as loudly. So why couldn’t a purely informational me, existing as unrepresented information, say “equations” too? Physical reality is a burdensome detail which doesn’t add any explanatory power to your model; the claim that information needs to be represented in order for conscious entities contained within that information to exist seems to me to have no evidence backing it up, nor indeed to be capable of having such evidence, and therefore Occam demands that we frame our model in such a way as to make that claim inexpressible. It’s rather like moving from configuration space to relative configuration space; unmeasurable claims become unreal.
It doesn’t need to “come to be”; ‘time’ and ‘causality’ are parochial notions, concepts we can use to model things within our universe. Expecting the multiverse to obey them seems to me to be a Mind Projection Fallacy. A block universe just is.
Thanks for the insightful critique, by the way—it’s helping me to understand the arguments better and see weak points that I wouldn’t have noticed myself. I’m still not sure whether my theory is circular, nor whether I should care if it is.