OK, I now understand how you’re defining your probability measure (and version of the SIA). It seems odd to me to weight a universe that stops higher than an identical one that runs forever (with blank space after the stop point). But it’s your measure, so let’s go with that. Basically, it seems you’re defining a prior by:
P(I’m an observer in universe U) = P_Levin(U) × (fraction of the time spent simulating U that is spent simulating an observer)
and then renormalizing. Let’s call that fraction the computational observer density of universe U.
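To make the renormalization concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the two universe programs, their Levin priors, and their observer densities are numbers I have invented purely to show the product-then-normalize structure of the measure.

```python
# Toy sketch of the proposed measure (all numbers are invented).
# levin_prior stands in for P_Levin(U), e.g. roughly 2^-(length of the program that runs U).
# observer_density is the fraction of the simulation's steps spent simulating observers.

universes = {
    "U_sparse": {"levin_prior": 2 ** -100, "observer_density": 1e-9},
    "U_dense":  {"levin_prior": 2 ** -105, "observer_density": 1e-3},
}

# Unnormalized weight: P_Levin(U) x computational observer density of U.
weights = {name: u["levin_prior"] * u["observer_density"]
           for name, u in universes.items()}

# Renormalize so the weights sum to 1 over the universes under consideration.
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}

for name, p in sorted(posterior.items()):
    print(f"P(I'm an observer in {name}) = {p:.6f}")
```

With these made-up numbers the observer-dense universe wins by a factor of tens of thousands despite its slightly longer program, which is essentially the FNC-like effect I describe next.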
One thought here is that your measure has a very similar impact to Neal’s FNC, which I discussed elsewhere in the thread. It will give a high weighting to models of the universe with a high density of intelligent civilizations, such that they appear in a high fraction of star systems but then die out before expanding and reaching our own solar system. So to that extent it is still “doomerish”. Or, worse, it gives an even higher weighting to our not taking our observations seriously at all, so that, contrary to appearances, our universe really is packed full of observers (from expanded civilizations) and we’re in a simulation or experiment that fools us into thinking the universe is pretty much empty. (If we’re in a simulation run inside U, then whenever U is simulated we’re being simulated at one further remove, i.e. in a sub-simulation.)
On the other hand, your measure is based on computational density rather than physical density, so it might not have quite this effect. In particular, suppose the simulation of U runs at very different rates depending on the complexity of what it is simulating: it whizzes through millions of years of empty space in a few steps, takes a bit longer when simulating stars and planetary systems, slows down considerably when it has to simulate the evolution of life on a planet, and utterly bogs down when it gets to conscious observers on that planet (since at that point it needs a massively detailed, step-by-step simulation of all their neuron firings to work out what happens next).
That, I think, avoids the distortion towards a very high physical density of observers. Even if observers are—as they appear to be—rare in our universe, they could still be taking up most of the computing time. But in that case the measure is also insensitive to the absolute number of observers simulated, so it doesn’t give much of an SIA weighting towards large numbers of observers in the first place. We could imagine, for instance, that the simulation of U runs through the “doom” of the human race (and other complex life); then, since there is nothing complex left to slow it down any more, it speeds up, whizzes through to the end of the universe, and (under your measure) starts again. It will still have spent most of its computational steps simulating observers.
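Here is a toy version of that calculation, again with invented numbers (the eras, durations, and steps-per-year rates are all assumptions of mine, not anything from your setup). It shows how the computational observer density can sit near 1 even when observers are physically rare, and how tacking a long, cheaply simulated post-doom stretch onto the history barely moves it:

```python
# Toy illustration of computational vs physical observer density (all numbers invented).

def observer_fraction(eras):
    """eras: list of (simulated_years, simulation_steps_per_year, contains_observers)."""
    total_steps = sum(years * rate for years, rate, _ in eras)
    observer_steps = sum(years * rate for years, rate, obs in eras if obs)
    return observer_steps / total_steps

history = [
    (1e9, 1e0, False),   # empty space: whizzed through in a few steps per aeon
    (1e9, 1e3, False),   # stars and planetary systems
    (1e6, 1e6, False),   # evolution of life, before any observers
    (1e5, 1e12, True),   # conscious observers: neuron-by-neuron simulation
]

print("with history ending at doom:      ", observer_fraction(history))

# "Doom": observers die out and the simulation whizzes cheaply through
# an enormous stretch of empty universe before (on your measure) restarting.
history.append((1e12, 1e0, False))
print("with post-doom emptiness appended:", observer_fraction(history))
```

With these rates the fraction stays above 99.99% either way, which is the sense in which doom need not reduce a universe’s weight under a computational-density measure, even though it obviously would under a head-count version of the SIA.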