It seems the computation you describe will run for infinite time, and will simulate infinitely many observers, but only finitely many in any given time period. Correct? If so, you still have my SIA problem.
If I am a “random” observer, then for any finite number N, I should expect to be simulated later than N steps into the whole computation. (Well, technically there is no way I could be sampled uniformly at random from a countably-infinite sequence of observers, except through some sort of limit construction; but let’s ignore this, and just suppose that for some “really big N” I have a “really small” probability of being simulated before N).
Now, imagine listing all the “small” finite universes which can be simulated in—say—fewer than 10^1000 steps from start to finish. There are at most 2^(10^1000) of those, and their simulations will all finish. So there must be some Nth computational step which happens after the very last small universe simulation has finished, and by the above argument I should expect myself to be simulated after step N. So there is still overwhelming prior probability that I find myself in a “big” (or infinite) universe. The SIA is still wiping out the small universes a priori.
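For what it's worth, here is a rough way to write that counting argument down (the symbols S and N below are introduced only for this sketch):

```latex
% Sketch of the counting argument above; S and N are notation for this sketch only.
Let $S = 10^{1000}$. A universe whose simulation halts within $S$ steps is specified
by at most the $S$ bits the simulation ever reads, so there are at most $2^{S}$ such
``small'' universes, and each of their simulations finishes. Hence there is some
global step $N$ of the whole computation by which every small-universe simulation
has already halted. For a ``random'' observer,
\[
  \Pr[\text{I am simulated before step } N] \approx 0 ,
\]
so with probability close to $1$ I am simulated after step $N$, i.e. inside a
``big'' or infinite universe.
```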
It was a nice try though; I had to think about this one a bit...
OK, we can posit that if any of the universes ends, we just restart it from the beginning. Now if there is 1 universe that runs for 1000 years with 1000 observers, and 1 universe that runs forever with 1000 observers, and their laws of physics are equiprobable, then their SIA probabilities are also equiprobable. The observers in the finite universe will be duplicated infinitely many times, but I don’t think this is a problem (see Nick Bostrom’s duplication paper). Also, some infinite universes might have an infinite number of finite simulations inside them, so it’s somewhat likely for an observer to be in a finite universe simulated by an infinite universe.
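To make the role of the restart concrete, here is a toy sketch (the numbers and function names are entirely my own illustration, not the actual construction) that counts observer-steps when the two universes are run in lockstep:

```python
# Toy illustration (my own, not the machinery proposed above): run two universe
# simulations in lockstep and count the steps spent simulating observers in each.
# Universe A is finite and ends after A_RUN_LENGTH steps; universe B runs forever.
# Both have 1000 observers alive whenever they are running, and we assume equal
# simulation efficiency.

A_RUN_LENGTH = 1_000       # arbitrary toy length of one run of the finite universe
TOTAL_STEPS = 1_000_000    # how long we let the lockstep computation run

def share_of_a(total_steps: int, restart: bool) -> float:
    """Fraction of all observer-steps so far that belong to universe A."""
    if restart:
        # A is restarted from the beginning whenever it ends, so every one of
        # its steps is spent simulating observers.
        a_steps = total_steps
    else:
        # A stops for good after one run; after that only B accumulates steps.
        a_steps = min(total_steps, A_RUN_LENGTH)
    b_steps = total_steps  # B never stops
    return a_steps / (a_steps + b_steps)

print(share_of_a(TOTAL_STEPS, restart=False))  # ~0.001: A's share vanishes over time
print(share_of_a(TOTAL_STEPS, restart=True))   # 0.5: equal weight, matching the equal prior
```

Without the restart, the finite universe's share of observer-steps goes to zero as the total computation grows; with the restart it stays at the 50/50 split matching the equal prior.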
I think you can deal with the infiniteness by noting that, for any sequence of observations, that sequence will be observed by some proportion of the observers in the multiverse. So you can still anticipate the future by comparing P(past observations + future observation) among the possible future observations.
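A rough way to write this rule down (the notation is introduced only for this sketch):

```latex
% Sketch of the "anticipate by proportions" rule; o, f and mu are notation for this sketch only.
Let $o$ be the sequence of past observations and $f_1, f_2, \dots$ the candidate
future observations. Write $\mu(x)$ for the proportion of observers in the
multiverse whose observations begin with the sequence $x$. Then anticipate the
future observation $f_i$ with probability
\[
  \Pr[f_i \mid o] \;=\; \frac{\mu(o f_i)}{\sum_j \mu(o f_j)} .
\]
Each $\mu(\cdot)$ is a proportion rather than a count, so the rule still makes
sense when the total number of observers is infinite.
```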
Sorry, I don’t quite follow this…
Your example considers an infinite universe with 1000 observers (and then presumably an infinite amount of dead space). You say this counts for the same weighted probability as a finite universe with 1000 observers (here assuming the universes had the same Levin probability originally).
OK, I’m with that so far: but then why doesn’t an infinite universe with 10^1000 observers count for 10^997 times more weight than a finite universe with 1000 = 10^3 observers?
Finally, why doesn’t an infinite universe with infinitely many observers count for infinitely more weight than a finite universe with 1000 observers? I’m just trying to understand your metric here.
Alternatively, when you discuss re-running the finite 1000-observer universe from the start (so the 1000 observers are simulated over and over again), then is that supposed to increase the weight assigned to the finite universe? Perhaps you think that it should, but if so, why? Why should a finite universe which stops completely receive greater weight than an otherwise identical universe whose simulation just continues forever past the stop point with loads of dead space?
Your example considers an infinite universe with 1000 observers (and then presumably an infinite amount of dead-space). You say this counts for the same weighted probability as a finite universe with 1000 observers (here assuming the universes had the same Levin probability originally).
In the original example I was assuming the 1000 observers were immortal so they contribute more observer-seconds. I think this is a better presentation:
We have:
1. A finite universe: 1000 people are born at the beginning. The universe is destroyed and restarted after 1000 years. After it restarts, another 1000 people are born, and so on.
2. An infinite universe: 1000 people are born at the beginning. Every 1000 years, everyone dies and 1000 more people are born.
If both have equal prior probability and efficiency, we should assign them equal weight. This is even though the second universe has infinitely more observers than (a single copy of) the finite universe.
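To spell out the arithmetic behind that claim, in terms of the fraction of simulation time spent on observers (what is later in the thread called the computational observer density):

```latex
% Worked comparison; p and d are notation for this sketch only.
Both universes keep exactly 1000 observers alive at every moment they are being
simulated, so each spends the same fraction $d$ of its simulation steps on
observers. With equal prior probability $p$ and equal efficiency,
\[
  W_{\text{finite, restarted}} \;=\; p \, d \;=\; W_{\text{infinite}} ,
\]
even though the infinite universe contains infinitely many distinct observers
while a single run of the finite one contains only 1000.
```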
Alternatively, when you discuss re-running the finite 1000-observer universe from the start (so the 1000 observers are simulated over and over again), then is that supposed to increase the weight assigned to the finite universe?
Yes.
Perhaps you think that it should, but if so, why?
Because there are more total observers. If the universe is restarted, there are 1000 observers per run and infinitely many runs, as opposed to 1000 observers total.
Why should a finite universe which stops completely receive greater weight than an otherwise identical universe whose simulation just continues forever past the stop point with loads of dead space?
For one, only the first can be simulated in its entirety by a machine in a finite universe. Also, in a universe with infinite time but only finite memory, only the first can be simulated infinitely many times.
Also, the universe with the dead space might contain simulations of finite universes (after all, with infinite time and quantum mechanics, everything happens). Then almost all of the observers in the infinite universe live inside simulations of finite universes, not in the infinite universe proper.
Another argument: if the 2 universes (one infinite with infinitely many observers, perhaps obtained by restarting; one infinite with only finitely many observers) are run in parallel, almost all observers will be in the first universe. It seems like it shouldn’t make a difference whether the universes are run in parallel or whether just one of them is chosen at random to be run.
OK, I now understand how you’re defining your probability measure (and version of the SIA). It seems odd to me to weight a universe that stops higher than an identical one that runs forever (with blank space after the stop point). But it’s your measure, so let’s go with that. Basically, it seems you’re defining a prior by:
P(I’m an observer in universe U) = P_Levin(U) × (fraction of the time simulating U which is spent simulating an observer)
and then renormalizing. Let’s call the “fraction” the computational observer density of universe U.
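A minimal numerical sketch of that prior, under the simplifying assumptions that P_Levin(U) can be stood in for by 2^(-program length) and that the observer densities are simply given (every number below is made up for illustration):

```python
# Minimal sketch of the measure described above. The program lengths and observer
# densities are made-up illustrative numbers, and 2 ** -length is only a stand-in
# for the Levin-style prior P_Levin(U).

# name -> (program length in bits, computational observer density in [0, 1])
universes = {
    "small, observer-dense":    (120, 0.3),
    "large, mostly dead space": (100, 0.001),
}

def weight(length_bits: int, density: float) -> float:
    """Unnormalised weight: P_Levin(U) * computational observer density of U."""
    return 2.0 ** (-length_bits) * density

raw = {name: weight(length, density) for name, (length, density) in universes.items()}
total = sum(raw.values())

for name, w in raw.items():
    print(f"P(I'm an observer in the {name} universe) = {w / total:.4f}")
```

With these made-up numbers the mostly-empty universe still wins, because its 20-bit shorter program outweighs its 300-fold lower observer density; the sketch is only meant to show how the two factors combine and renormalise.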
One thought here is that your measure has a very similar impact to Neal’s FNC that I discussed elsewhere in the thread. It will give a high weighting towards models of the universe with a high density of intelligent civilizations, such that they appear in a high fraction of star systems but then die out before expanding and reaching our own solar system. So to that extent it is still “doomerish”. Or, worse, it gives an even higher weighting towards us not taking our observations seriously at all, so that, contrary to appearances, our universe really is packed with a very high density of observers (from expanded civilizations) and we’re in a simulation or experiment that fools us into thinking the universe is pretty much empty. (If we’re in a simulation running within U, then we appear in a sub-simulation whenever U itself is simulated.)
On the other hand, your measure is based on computational density, rather than physical density, so it might not have quite this effect. In particular, suppose the simulation of U runs at very different rates depending on the complexity of what it is simulating. It whizzes through millions of years of empty space in a few steps, takes a bit longer when simulating stars and planetary systems, slows down considerably when it has to simulate evolution of life on a planet, and utterly bogs down when it gets to conscious observers on the planet (since at that point it needs a massively detailed step by step simulation of all their neuron firings to work out what is going to happen next).
That, I think, avoids the distortion towards very high physical density of observers. Even if observers are, as they appear to be, rare in our universe, they could still be taking up most of the computing time. But in that case, the measure is also insensitive to the absolute number of observers simulated, so it doesn’t give much of an SIA weighting towards large numbers of observers in the first place. We could imagine, for instance, that the simulation of U runs through the “doom” of the human race (and other complex life); then, since there is nothing complex left to slow it down any more, it speeds up, whizzes through to the end of the universe and (under your measure) starts again. It will still spend most of its computational steps simulating observers.
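As a toy illustration of that insensitivity (all the step costs below are numbers I am making up):

```latex
% Made-up step costs, only to illustrate the insensitivity to observer counts.
Suppose simulating one observer's entire life costs $10^{9}$ steps, while simulating
everything else in the universe (mostly empty space) costs $10^{12}$ steps in total.
A universe with $10^{3}$ observers then has computational observer density
\[
  \frac{10^{3} \cdot 10^{9}}{10^{3} \cdot 10^{9} + 10^{12}} = \tfrac{1}{2} ,
\]
while an otherwise identical universe with $10^{9}$ observers has density
\[
  \frac{10^{9} \cdot 10^{9}}{10^{9} \cdot 10^{9} + 10^{12}} \approx 1 .
\]
The weights differ by a factor of about $2$, not the factor of $10^{6}$ that an
observer-counting SIA would give.
```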