There’s no such universe. We exist simultaneously in all universes consistent with our experience.
That’s an interesting way to look at things.
I’m curious; is it more useful to look at it that way than the more standard separation between subjective experience on the one hand and objective reality on the other that most people make? When does that viewpoint make different predictions, if ever? Is it easier to use as a viewpoint?
Your viewpoint does make sense; at least at the quantum-mechanics level, it probably is a valid way to view the universe. At a macro level, though, I think “all universes consistent with our experience” is probably almost exactly the same as “there is one objective universe”; it’s just that we don’t have brains capable of using the data we already have to eliminate most of the possibilities. A superintelligence with the same data set we have would probably be able to figure out what “objective reality” looks like 99.9% of the time (on a macro level, at least); which means that most of your “possible universes” can’t actually exist in a way that’s consistent with our experiences; we’re just not smart enough to figure that out yet.
I’m curious; is it more useful to look at it that way than the more standard separation between subjective experience on the one hand and objective reality on the other that most people make? When does that viewpoint make different predictions, if ever? Is it easier to use as a viewpoint?
If you assume you exist in a single “objective” universe then you should be able to assign probabilities to statements of the form “I am in universe U”. However, such probabilities are not generally meaningful, as the following example demonstrates.
Suppose there is a coin which you know to be either a fair coin or a biased coin with 0.1 probability for heads and 0.9 probability for tails. Suppose your subjective probability of the coin being fair is 50%. After observing a sequence of coin tosses you should be able to update your subjective probability.
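For concreteness, here is a minimal sketch in Python of the ordinary Bayesian update described above (the function name and code are mine, not from the discussion):

    # A minimal sketch of the standard Bayesian update for this coin:
    # either fair (P(heads) = 0.5) or biased (P(heads) = 0.1), with a
    # 50% prior on each hypothesis.

    def posterior_fair(tosses: str, prior_fair: float = 0.5) -> float:
        """Return P(fair | tosses) for a sequence like 'HTTH'."""
        like_fair = like_biased = 1.0
        for t in tosses:
            like_fair *= 0.5
            like_biased *= 0.1 if t == 'H' else 0.9
        evidence = prior_fair * like_fair + (1 - prior_fair) * like_biased
        return prior_fair * like_fair / evidence

    print(posterior_fair('TTTTT'))  # ~0.05: tails-heavy data favours the biased coin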
Now let’s introduce another assumption: when the coin lands tails, you are split into 9 copies; when the coin lands heads, nothing special happens. Consider again a sequence of coin tosses. How should you update your probability of the coin being fair? Should you assume that, because of the formation of 9 copies, your subjective a priori probability of getting tails is multiplied by 9? The Anthropic Trilemma raises its ugly head.
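To see how the trilemma bites, here is a sketch contrasting the two candidate updates (the rule names and code are my own framing): Rule A ignores the copying, while Rule B multiplies the per-toss tails probability by the 9 copies who observe it and renormalizes.

    # Rule A: update on the raw coin probabilities, ignoring the copying.
    # Rule B: weight each tails observation by the 9 copies who see it.

    def posterior_fair_weighted(tosses: str, tails_copy_weight: float = 1.0) -> float:
        p = {'fair': 0.5, 'biased': 0.5}      # prior over the two hypotheses
        heads_prob = {'fair': 0.5, 'biased': 0.1}
        for h in p:
            for t in tosses:
                ph = heads_prob[h]
                pt = (1.0 - ph) * tails_copy_weight
                z = ph + pt                    # renormalize per toss
                p[h] *= ph / z if t == 'H' else pt / z
        return p['fair'] / sum(p.values())

    print(posterior_fair_weighted('TTTTT'))                       # Rule A: ~0.05
    print(posterior_fair_weighted('TTTTT', tails_copy_weight=9))  # Rule B: ~0.39

Under Rule B both hypotheses predict mostly-tails observations, so the same tails-heavy sequence that was strong evidence for the biased coin under Rule A becomes weak evidence. Nothing in the setup tells you which rule is correct; that is the problem.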
IMO, the right answer is that of UDT: there are no meaningful subjective expectations. There are only answers to decision-theoretic questions, e.g. questions of the sort “on what should you bet, assuming the winnings of all your clones are accumulated in a given manner and you want to maximize the total profit?”. Therefore there is also no meaningful way to perform a Bayesian update, i.e. there are no meaningful epistemic probabilities.
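As a toy illustration of what replaces subjective expectation (the payoff scheme here is an invented example, not anything from the comment): suppose the coin is known to be fair, a winning bet pays 1, and after tails there are 9 copies each holding the bet. The “right” bet then depends entirely on the accumulation rule.

    # Replace "what do I expect?" with "which bet maximizes total payoff
    # under a given accumulation rule?" for a known-fair coin.

    def expected_total_payoff(bet_on_tails: bool, sum_over_copies: bool) -> float:
        p_heads, p_tails, copies_after_tails = 0.5, 0.5, 9
        if bet_on_tails:
            winners = copies_after_tails if sum_over_copies else 1
            return p_tails * winners
        return p_heads * 1  # after heads there is a single copy either way

    # Winnings summed over all copies: tails dominates (4.5 vs 0.5).
    print(expected_total_payoff(True, True), expected_total_payoff(False, True))
    # Only one copy gets paid: the bets are exactly symmetric (0.5 vs 0.5).
    print(expected_total_payoff(True, False), expected_total_payoff(False, False))

The question “what probability should I assign to experiencing tails?” has no rule-independent answer, but each decision problem does.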
Everything becomes clear once you acknowledge that all possibilities coexist and your decisions affect all of them. However, when you’re computing your utility you should weight these possibilities according to the Solomonoff prior. In my view, the weights represent how real a given possibility is (the amount of “magic reality fluid”). In Coscott’s view it is just a part of the utility function.
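As a cartoon of that weighting (the actual Solomonoff prior is uncomputable; the “world programs” below and the bits-per-character proxy are purely illustrative assumptions):

    # Weight coexisting possibilities by ~2^-K(world), with the uncomputable
    # complexity K crudely proxied by the length of a hypothetical
    # world-program, at 8 bits per character.

    def weight(program: str) -> float:
        return 2.0 ** (-8 * len(program))

    worlds = {
        'physics as usual': 'laws',
        'purple pumpkins next second': 'laws+pumpkins',
    }
    raw = {name: weight(prog) for name, prog in worlds.items()}
    total = sum(raw.values())
    for name, w in raw.items():
        print(name, w / total)  # the simpler world gets almost all the weight

Whether that weight measures “magic reality fluid” or is folded into the utility function, the decision-relevant arithmetic is the same.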
which means that most of your “possible universes” can’t actually exist in a way that’s consistent with our experiences; we’re just not smart enough to figure that out yet.
Not exactly. There is no way to rule out e.g. you seeing purple pumpkins falling out of the sky in the next second. It is not inconsistent, it is just improbable. Worse, since subjective expectations don’t make sense, you can’t even say it’s improbable. The only thing you can say is that you should be making your decisions as if purple pumpkins are not going to fall out of the sky.
Not exactly. There is no way to rule out e.g. you seeing purple pumpkins falling out of the sky in the next second. It is not inconsistent, it is just improbable.
Well, let me put it this way. If there is no mathematically and logically consistent universe in which everything that I already know to be true is actually true, and in which purple pumpkins suddenly fall out of the sky, then it is impossible for that to happen. That is true even if I, personally, am not intelligent enough to do the math demonstrating that it is impossible given my previous observations.
You will never experience two different things that are actually logically inconsistent with each other. Which means that every time you experience anything, it automatically rules out any number of possibilities, whether you know that or not.
I suspect (although I don’t know for sure) that a superintelligence would be able to rule out most possibilities with a fairly small amount of hard evidence, to a much greater extent than we can. So that means that if you have access to that same information, then many things are, in fact, impossible for you to ever experience, because they’re inconsistent with things you already know, even if no human or group of humans has the intelligence to actually prove that they’re inconsistent.