If we believe that our conscious experience is a computation, and we hold a universal prior which basically says that the multiverse consists of all Turing machines, run in such a fashion that the more complex ones are less likely, the combination seems very suggestive for anthropics.
I visualize an array of all Turing machines, either with the simpler ones duplicated (which requires an infinite number of every finite-length machine, since there are twice as many copies of each machine of length n as of each machine of length n+1), or with the more complex ones getting “thinner,” their outputs stretching out into the future. Next, suppose we know we’ve observed a particular sequence of 1s and 0s. This is where I break with Solomonoff Induction: I don’t assume we’ve observed a prefix of the output. Instead, assign each occurrence of the sequence anywhere in the output of any machine a probability that decreases with the complexity of the machine it’s on (or assign them all equal probabilities, if you’re imagining duplicated machines; the doubling then gives occurrences on a machine of length n twice the weight of occurrences on one of length n+1), normalized so that you get total probability one, of course. Then assign a sequence a probability of coming next equal to the sum of the probabilities of all occurrences where that sequence comes next.
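Spelled out in my notation (a sketch, using the 2^{-length} weighting that the duplicated-machines picture implies, with $\ell(M)$ the length of machine $M$, $x$ the observed sequence, and $y$ a candidate continuation):

$$P(y \text{ comes next} \mid x) \;=\; \frac{\sum_{M} 2^{-\ell(M)} \cdot \#\{\text{occurrences of } xy \text{ in } M\text{'s output}\}}{\sum_{M} 2^{-\ell(M)} \cdot \#\{\text{occurrences of } x \text{ in } M\text{'s output}\}}.$$

Both sums have infinitely many terms, which is exactly the problem the next paragraph deals with.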
Of course, any sequence is going to occur an infinite number of times, so each occurrence has zero probability. So what you actually have to do is truncate all the computations at some time step T, do the procedure from the previous paragraph, and hope that the limit as T goes to infinity exists. It would be wonderful if you could prove that it does for all sequences.
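Here’s a minimal sketch of the truncated procedure in Python. The three “machines” are hypothetical stand-ins (a program length plus a function returning the first T output bits), not a real enumeration of Turing machines; the point is just to show the counting, weighting, and normalization, and to let you watch the estimate as T grows.

```python
from fractions import Fraction

# Hypothetical stand-ins for machines: (program length in bits, first-T-bits generator).
def constant_ones(t):
    return [1] * t

def alternating(t):
    return [i % 2 for i in range(t)]

def ones_then_zeros(t):
    return [1] * min(t, 5) + [0] * max(0, t - 5)

MACHINES = [(3, constant_ones), (4, alternating), (6, ones_then_zeros)]

def occurrences(pattern, out):
    """Start indices at which pattern occurs in out."""
    n = len(pattern)
    return [i for i in range(len(out) - n + 1) if out[i:i + n] == pattern]

def prob_next(observed, candidate, T):
    """P(candidate comes next | observed), with every machine truncated at step T."""
    total = Fraction(0)
    matching = Fraction(0)
    for length, run in MACHINES:
        weight = Fraction(1, 2 ** length)  # 2^-length weight for this machine
        out = run(T)
        for i in occurrences(observed, out):
            j = i + len(observed)
            if j + len(candidate) > len(out):
                continue  # continuation cut off by the truncation
            total += weight  # each occurrence counts once, weighted by its machine
            if out[j:j + len(candidate)] == candidate:
                matching += weight
    return matching / total if total else None

# Hope the estimate settles down as the truncation time T grows.
for T in (10, 100, 1000):
    print(T, prob_next([1, 1], [1], T))
```

In this toy example the all-ones machine produces ever more occurrences of [1, 1] as T grows, so the estimate climbs toward 1, which already hints at the repeating-universes worry I get to at the end.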
To convert this into an anthropic principle, I assume that a given conscious experience corresponds to some output sequence, or at least that things behave as if this were the case. Then you can treat the fact that you exist as an observation of a certain sequence (or of one of a certain class of sequences).
So what sort of anthropic principle does this lead to? Well, if we’re talking about different possible physical universes, then after you weight them for simplicity, you weight them for how dense their output is in conscious observer-moments (that is, in sequences corresponding to conscious experiences). (This assumes you don’t have more specific information. If you have red hair, and you know how many conscious experiences of having red hair a universe produces, you can weight by the density of those instead.) So in the Presumptuous Philosopher case, where we have two physical theories differing in the size of the universe by an enormous factor and agreeing on everything else, including the population density of conscious observers, anthropics tells us nothing. (I’m assuming that all stuff requires an equal amount of computation, or at least that computational intensity and consciousness are not too strongly correlated. There may be room for refinement here.) On the other hand, if we’re deciding between two universes of equal simplicity and equal size, but with different numbers of conscious observer-moments, we should weight them according to the number of conscious observer-moments, just as the Self-Indication Assumption (SIA) would. In cases where someone is flipping coins, creating people, and putting them in rooms, if we regard there as being two equally simple universes, one where the coin lands heads and one where the coin lands tails, then this principle looks like SIA.
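As a one-line summary of that weighting (my notation, nothing standard): for a universe $U$ and your total evidence $E$,

$$P(U \mid E) \;\propto\; 2^{-\ell(U)} \cdot \rho_U(E),$$

where $\rho_U(E)$ is the density of $E$-matching observer-moments in $U$’s output. Equal densities, as in the Presumptuous Philosopher case, leave only the simplicity prior; equal simplicity and size but different populations make the densities differ, which is where the SIA-like behavior comes from.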
The main downside I can see to this framework is that it seems to predict that, if there are any repeating universes we could be in, we should be almost certain we’re in one, and not in one that will experience heat death: a universe that keeps running forever keeps producing conscious observer-moments, so its occurrences swamp the finitely many produced by a universe that winds down. This is a downside because, last I heard, our universe looks like it’s headed for heat death. Maybe earlier computation steps should be weighted more heavily? This could also guarantee the existence of the limit I discussed above, if it’s not already guaranteed.
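One way to make that precise (just a sketch): give an occurrence at time step $t$ on a machine of length $\ell(M)$ the weight

$$w(M, t) \;=\; 2^{-\ell(M)}\,\gamma^{t}, \qquad 0 < \gamma < 1.$$

Each machine then contributes at most $2^{-\ell(M)}/(1-\gamma)$ in total, so as long as the machine weights themselves are summable (say, with a prefix-free encoding, so $\sum_M 2^{-\ell(M)} \le 1$), the truncated numerator and denominator both converge as $T \to \infty$, and the limit exists whenever the observed sequence occurs at all. It would also mean that a universe which keeps producing observer-moments forever no longer automatically swamps one that stops, since its total contribution is bounded.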