Supposing that all possible universes ‘exist’, with some weighting by simplicity or requirement of uniformity, does not make me feel less fundamentally confused about all this.
Shouldn’t we strongly expect this weighting, by Solomonoff induction?
Allow me to paraphrase him with some of my own thoughts.
Dang, existence, what is that? Can things exist more than other things? In Solomonoff induction we have something that kind of looks like “all possible worlds”, or computable worlds anyway, and each of them comes equipped with a little number that discounts it by its complexity. So maybe that’s like existing partially? Simple worlds exist really strongly, and complex worlds are faint? That… that’s a really weird mental image, and I don’t want to stake very much on its accuracy. I mean, really, what the heck does it mean to be in a world that doesn’t exist very much? I get a mental image of fog or a ghost or something. That’s silly, because it needlessly proposes ghosty behavior on top of the world’s behavior, which is what determines its complexity in the first place, so my mental imagery is failing me.
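Just to keep myself honest about what that “little number” is, here’s a toy sketch of the discount, with invented worlds and made-up description lengths standing in for anything like a real universal prior: each candidate world gets weight 2 to the minus its description length, so short descriptions dominate.

```python
# Toy stand-in for the Solomonoff-style discount: every candidate "world"
# gets weight 2^(-description length), so shorter descriptions dominate.
# The worlds and description lengths below are invented for illustration.

worlds = {
    "all zeros": 3,        # pretend this world has a 3-bit shortest description
    "alternating": 8,      # pretend: 8 bits
    "arbitrary mess": 40,  # pretend: 40 bits
}

def complexity_weight(description_length_bits: int) -> float:
    """Shorter descriptions get exponentially more weight."""
    return 2.0 ** (-description_length_bits)

weights = {name: complexity_weight(k) for name, k in worlds.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name}: weight {w:.12f}, share {w / total:.6f}")
```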
So what does it mean for my world to exist less than yours? I know how that numerical discount plays into my decisions and how it lets me select among possible explanations; it’s a very nice and useful little principle. Or at least it’s useful in this world. But maybe I’m thinking that in multiple worlds, in some of which I’m about to find myself with negative six octarine tentacles. So Occam’s razor is useful in … some world. But the fact that it’s useful to me suggests that it says something about reality, maybe even about all those other possible worlds, whatever they are. Right? Maybe? It doesn’t seem like a very big leap to go from “Occam’s razor is useful” to “Occam’s razor is useful because, when using it, my beliefs reflect and exploit the structure of reality”, or to “Some worlds exist more than others, which is the obvious interpretation of what ontological fact is being taken into consideration in the math of Solomonoff induction”.
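And here’s roughly how I actually use the discount to pick among explanations, again as a toy sketch with invented names, description lengths, and likelihoods: multiply a 2^(-description length) prior by how well each explanation predicts the data, and keep the winner.

```python
# Toy version of how the discount selects among explanations: score each
# explanation by (2 ** -description_length) * (probability it gives the data).
# Names, lengths, and likelihoods are all invented for illustration.

observations = [0, 0, 0, 0, 0, 0]

explanations = [
    # (name, pretend description length in bits, probability assigned to the data)
    ("always zero",           4, 1.0),
    ("zero with rare flips", 12, 0.9),
    ("pure noise",           20, 0.5 ** len(observations)),
]

def score(description_length_bits, likelihood):
    # unnormalized posterior: simplicity prior times fit to the data
    return (2.0 ** -description_length_bits) * likelihood

ranked = sorted(explanations, key=lambda e: score(e[1], e[2]), reverse=True)
for name, k, lik in ranked:
    print(f"{name}: score {score(k, lik):.3e}")
print("preferred explanation:", ranked[0][0])
```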
Wei Dai suggested that maybe prior probabilities are just utilities: simpler universes don’t exist more, we just care about them more, or let our estimates of the consequences of our actions in those worlds steer our decisions more than the consequences in other, more complex, funny-looking worlds. That’s an almost satisfying explanation, and it would sweep away a lot of my confused questions, but it’s not quite obviously right to me, and that’s the standard I hold myself to. One thing that feels icky about the idea of “degree of existence” actually being “degree of decision importance” is that worlds with logical impossibilities used to have priors of 0 in my model of normative belief. But if priors are utilities, then a thing is a logical impossibility only because I don’t care at all about worlds in which it occurs? And likewise truth depends on my utility function? And there are people in impossible worlds who say that I live in an impossible world because of their utility functions? Graagh, I can’t even hold that belief in my head without squicking. How am I supposed to think about them existing while simultaneously supposing that it’s impossible for them to exist?
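If I try to cash out Wei Dai’s suggestion as I understand it, the toy picture is something like the sketch below, with worlds, weights, and payoffs all invented: the decision machinery only ever sees the product weight(world) × payoff, so it can’t tell “this world exists more” apart from “I care about this world more”.

```python
# Toy cash-out of the "priors are really utilities" reading. Everything here
# (worlds, weights, payoffs) is invented. The point: the decision rule only
# ever uses weight(world) * payoff(action, world), so the same numbers give
# the same choice whether I call them "degree of existence" or "how much I care".

worlds = ["simple world", "complex world"]
weight = {"simple world": 0.9, "complex world": 0.1}

payoff = {
    ("act A", "simple world"): 10, ("act A", "complex world"): 0,
    ("act B", "simple world"): 4,  ("act B", "complex world"): 40,
}

def value(action, weights):
    return sum(weights[w] * payoff[(action, w)] for w in worlds)

for action in ["act A", "act B"]:
    print(action, value(action, weight))

# Reading the weights as a prior or as a caring measure changes the story I tell,
# but not this computation, and so not the decision.
print("chosen:", max(["act A", "act B"], key=lambda a: value(a, weight)))
```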
Or maybe “a logically impossible event” isn’t meaningful. It sure feels meaningful. It feels like I should even be able to compute the consequences of logical impossibilities by looking at a big corpus of mathematical proofs and saying “these two proofs have all the same statements, just in a different order, so they depend on the same facts”, or “these two proofs can be compressed by extracting a common subproof”, or “using dependency-equivalences and commonality of subproofs, we should be able to construct a little directed graph of mathematical facts on which we can then compute Pearlian mutilated-model counterfactuals, like what would be true if 2=3”, in a non-paradoxical way, in a way that treats truth and falsehood and the interdependence of facts as part of the behavior of a reality external to my beliefs and desires.
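Here’s the kind of toy thing I’m imagining, with a made-up dependency graph rather than anything extracted from a real proof corpus: to suppose 2=3, override that node, cut whatever would normally determine it, and recompute what follows downstream, Pearl-style.

```python
# Toy mutilated-model counterfactual on a made-up dependency graph of "facts"
# (nothing here is extracted from a real proof corpus). To suppose 2=3,
# force that node to a value, cut the machinery that would normally derive it,
# then recompute the facts downstream of it.

# which facts each derived fact (pretend-)depends on
parents = {
    "1=2": ["2=3"],                    # subtract 1 from both sides
    "all numbers equal": ["1=2"],      # collapse everything from there
}

def evaluate(root_facts, interventions):
    facts = dict(root_facts)
    facts.update(interventions)        # forced values override derivation
    changed = True
    while changed:                     # propagate until the graph settles
        changed = False
        for fact, deps in parents.items():
            if fact in interventions:
                continue               # the do(): incoming arrows are cut
            value = all(facts.get(d, False) for d in deps)
            if facts.get(fact) != value:
                facts[fact] = value
                changed = True
    return facts

actual = {"2=3": False}
print(evaluate(actual, interventions={}))             # the actual, consistent world
print(evaluate(actual, interventions={"2=3": True}))  # the counterfactual: force 2=3
```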
And I know that sounds confused, and the more I talk the more confused I sound. But not thinking about it doesn’t seem like it’s going to get me closer to the truth either. Aiiiiiiieeee.
I notice that I am meta-confused...
Probability is not obviously amount of existence.