Your example considers an infinite universe with 1000 observers (and then presumably an infinite amount of dead-space). You say this counts for the same weighted probability as a finite universe with 1000 observers (here assuming the universes had the same Levin probability originally).
In the original example I was assuming the 1000 observers were immortal, so they contribute more observer-seconds. I think this is a better presentation:
We have:
- a finite universe: 1000 people are born at the beginning; the universe is destroyed and restarted after 1000 years; after it restarts, another 1000 people are born, and so on.
- an infinite universe: 1000 people are born at the beginning; every 1000 years, everyone dies and 1000 more people are born.
If both have equal prior probability and efficiency, we should assign them equal weight, even though the second universe has infinitely more observers than (a single copy of) the finite universe.
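To make the equal-weight claim concrete, here is a toy sketch (the step counts are my own, purely illustrative). Under a measure based on the fraction of simulation time spent on observers, every 1000-year epoch has the same structure in both universes, so the fraction, and hence the weight given equal priors, is the same no matter how many epochs are run:

```python
# Toy illustration (made-up step counts): the fraction of simulation work spent
# on observers is identical, epoch by epoch, for (A) the finite universe that is
# restarted every 1000 years and (B) the infinite universe where everyone dies
# and is replaced every 1000 years, even though (B) contains infinitely more
# observers than a single run of (A).

OBSERVER_WORK = 1_000_000   # steps spent simulating the 1000 observers per epoch (assumed)
OVERHEAD_WORK = 250_000     # steps spent simulating everything else per epoch (assumed)

def observer_fraction(n_epochs):
    """Running fraction of simulation steps spent on observers after n_epochs epochs."""
    observer_steps = n_epochs * OBSERVER_WORK
    total_steps = n_epochs * (OBSERVER_WORK + OVERHEAD_WORK)
    return observer_steps / total_steps

# The same function describes both universes, so their fractions (and weights,
# given equal priors) agree for any number of epochs: always 0.8 here.
for n in (1, 10, 10_000):
    print(n, observer_fraction(n))
```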
Alternatively, when you discuss re-running the finite 1000-observer universe from the start (so the 1000 observers are simulated over and over again), then is that supposed to increase the weight assigned to the finite universe?
Yes.
Perhaps you think that it should, but if so, why?
Because there are more total observers. If the universe is restarted, there are 1000 observers per run and infinitely many runs, as opposed to 1000 observers in total.
Why should a finite universe which stops completely receive greater weight than an otherwise identical universe whose simulation just continues forever past the stop point with loads of dead space?
For one, only the first can be simulated by a machine in a finite universe. Also, in a universe with infinite time but only finite memory, only the first can be simulated infinitely many times.
Also, the universe with the dead space might contain simulations of finite universes (after all, with infinite time and quantum mechanics everything happens). Then almost all of the observers in the infinite universe are simulated inside a finite universe, not in the infinite universe proper.
Another argument: if the two universes (one infinite with infinitely many observers, perhaps via restarting; the other infinite with only finitely many observers) are run in parallel, almost all observers will be in the first. It seems like it shouldn’t make a difference whether the universes are run in parallel or whether just one of them is chosen randomly to be run.
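As a rough illustration of the parallel-run point (all step counts below are assumptions of mine, not from the discussion), interleave the two simulations step for step and count where the observer-steps land:

```python
# Universe 1: infinite, with observers in every 1000-year epoch (e.g. via restarting).
# Universe 2: infinite, with 1000 observers at the start and dead space ever after.
# Give each the same number of interleaved steps and count observer-steps.

STEPS = 10_000_000                # steps given to each simulation (assumed)
OBSERVER_STEPS_PER_EPOCH = 1_000  # steps spent on observers within each epoch (assumed)
EPOCH_STEPS = 1_250               # total steps needed to simulate one epoch (assumed)

obs_steps_u1 = (STEPS // EPOCH_STEPS) * OBSERVER_STEPS_PER_EPOCH  # every epoch has observers
obs_steps_u2 = OBSERVER_STEPS_PER_EPOCH                           # only the first epoch does

share_u1 = obs_steps_u1 / (obs_steps_u1 + obs_steps_u2)
print(f"{share_u1:.6f} of the observer-steps are in universe 1")  # ~0.999875, and -> 1 as STEPS grows
```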
OK, I now understand how you’re defining your probability measure (and version of the SIA). It seems odd to me to weight a universe that stops higher than an identical one that runs forever (with blank space after the stop point). But it’s your measure, so let’s go with that. Basically, it seems you’re defining a prior by:
P(I’m an observer in universe U) = P_Levin(U) × (fraction of the time spent simulating U that is spent simulating an observer)
and then renormalizing. Let’s call that fraction the “computational observer density” of universe U.
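To check I have the definition right, here is how I would compute it over a toy hypothesis set (the universes, Levin priors and densities below are made-up numbers, purely for illustration):

```python
# Sketch of the measure as I understand it: weight each universe U by
# P_Levin(U) times its computational observer density, then renormalize.
# Every number here is invented for illustration.

universes = {
    # name: (levin_prior, computational_observer_density)
    "finite_restarting":         (0.4, 0.80),
    "infinite_with_replacement": (0.4, 0.80),
    "mostly_dead_space":         (0.2, 0.001),  # almost all its simulation time is empty space
}

raw = {name: prior * density for name, (prior, density) in universes.items()}
total = sum(raw.values())
posterior = {name: weight / total for name, weight in raw.items()}

for name, p in posterior.items():
    print(f"P(I'm an observer in {name}) = {p:.4f}")
```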
One thought here is that your measure has a very similar impact to Neal’s FNC, which I discussed elsewhere in the thread. It will give a high weighting to models of the universe with a high density of intelligent civilizations, such that they appear in a high fraction of star systems but then die out before expanding and reaching our own solar system. So to that extent it is still “doomerish”. Or, worse, it gives an even higher weighting to us not taking our observations seriously at all, so that, contrary to appearances, our universe really is packed with a very high density of observers (from expanded civilizations) and we’re in a simulation or experiment that fools us into thinking the universe is pretty much empty. (If we’re inside a simulation running within U, then we’re a sub-simulation whenever U is simulated.)
On the other hand, your measure is based on computational density rather than physical density, so it might not have quite this effect. In particular, suppose the simulation of U runs at very different rates depending on the complexity of what it is simulating. It whizzes through millions of years of empty space in a few steps, takes a bit longer when simulating stars and planetary systems, slows down considerably when it has to simulate the evolution of life on a planet, and utterly bogs down when it gets to conscious observers on that planet (since at that point it needs a massively detailed step-by-step simulation of all their neuron firings to work out what is going to happen next).
That, I think, avoids the distortion towards very high physical density of observers. Even if observers are rare in our universe, as they appear to be, they could still be taking up most of the computing time. But in that case the measure is also insensitive to the absolute number of observers simulated, so it doesn’t give much of an SIA weighting towards large numbers of observers in the first place. We could imagine, for instance, that the simulation of U runs through the “doom” of the human race (and other complex life); then, since there is nothing complex left to slow it down any more, it speeds up, whizzes through to the end of the universe and (under your measure) starts again. It will still spend most of its computational steps simulating observers.
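To illustrate that insensitivity with some made-up step costs (entirely my own assumptions, not your numbers):

```python
# If empty space is vastly cheaper to simulate than observers, the fraction of
# compute spent on observers is close to 1 whether observers are doomed early
# or persist a thousand times longer, so the measure barely distinguishes the two.

# Simulation steps per simulated year of each kind of content (assumed).
COST = {"empty_space": 1e-6, "stars": 1e-3, "simple_life": 1.0, "observers": 1e6}

def observer_compute_fraction(years_by_content):
    """Fraction of compute spent on observers, given simulated years of each content type."""
    steps = {kind: years_by_content.get(kind, 0) * COST[kind] for kind in COST}
    return steps["observers"] / sum(steps.values())

early_doom = {"empty_space": 1e12, "stars": 1e10, "simple_life": 1e6, "observers": 1e4}
long_lived = {"empty_space": 1e12, "stars": 1e10, "simple_life": 1e6, "observers": 1e7}

print(observer_compute_fraction(early_doom))   # ~0.9988: already close to 1
print(observer_compute_fraction(long_lived))   # ~0.999999: barely different despite 1000x more observer-years
```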