Anthropics: Full Non-indexical Conditioning (FNC) is inconsistent
I’ll be using Full non-indexical conditioning (FNC) as an example of anthropic reasoning. Given that, it’s useful to know that FNC is (time) inconsistent: the expected future probability is not the same as the current probability. I can’t find a full write-up of this specific fact, so I thought I’d present it here for reference.
In FNC:
one should condition on all the evidence available, including all the details of one’s memory, but without considering “indexical” information regarding one’s place in the universe (as opposed to what the universe contains).
So, suppose that there were either $1$ or $\omega$ initially identical copies of you in the universe, with equal probability. Your copies never communicate or get extra evidence of each other’s existence. Each copy of you has seen (and remembered) the outcome of $\Omega$ independent random bits, with $2^\Omega \gg \omega$. You yourself have seen the sequence $s$ (which particular sequence $s$ you saw doesn’t actually matter, as all sequences are equiprobable).
Then, by FNC reasoning, the probability of a copy of you seeing $s$ in a universe with only $1$ copy of you is $1/2^\Omega$, while the probability of that happening in a universe with $\omega$ copies of you is:
$$1-\left(1-\tfrac{1}{2^\Omega}\right)^\omega \approx 1-\left(1-\tfrac{\omega}{2^\Omega}\right) = \tfrac{\omega}{2^\Omega},$$ by the binomial approximation.
Thus FNC will update towards the large universe by a ratio that is roughly $\omega/2^\Omega : 1/2^\Omega$ - in other words, $\omega : 1$. Translated into probabilities, this gives a probability close to $1/(\omega+1)$ for the small universe, and $\omega/(\omega+1)$ for the large one.
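As a sanity check, here is a minimal numerical sketch of this update in Python. The values Omega = 40 and omega_copies = 1000 are my own illustrative choices (anything with $2^\Omega \gg \omega$ will do); the script compares the exact likelihood with the binomial approximation and checks that the posterior comes out at roughly $\omega/(\omega+1)$.

```python
# Minimal sketch of the FNC update, assuming the setup above.
# Omega and omega_copies are illustrative values with 2**Omega >> omega_copies.
Omega = 40            # number of remembered independent random bits
omega_copies = 1000   # number of copies in the "large" universe

p_seq = 2.0 ** -Omega  # chance that one copy sees the specific sequence s

# Likelihood that someone with your exact memories exists, in each universe:
small = p_seq                              # the single copy must see s
large = 1 - (1 - p_seq) ** omega_copies    # at least one of omega copies sees s
large_approx = omega_copies * p_seq        # binomial approximation: omega / 2**Omega

# FNC posterior on the large universe, starting from a 1/2 : 1/2 prior:
posterior_large = large / (large + small)

print(f"exact likelihood (large universe): {large:.3e}")
print(f"binomial approximation:            {large_approx:.3e}")
print(f"posterior on the large universe:   {posterior_large:.6f}")
print(f"omega/(omega+1):                   {omega_copies / (omega_copies + 1):.6f}")
```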
This makes FNC very close to SIA; in fact, in the limit of finitely many copies remembering infinitely many different random bits, FNC is exactly SIA.
But now consider the situation before any of your copies see any random bits (or, at least, where they only see non-independent random bits, i.e. bits common to all the copies). Then all copies have seen the same sequence, so the update is 1:1; i.e. there is no anthropic update at all, and the probability of either universe remains 1/2.
But if you know that all your copies will see an independent sequence of $\Omega$ random bits, then you can predict, with certainty, that your future probabilities will be (almost exactly) $1/(\omega+1)$ and $\omega/(\omega+1)$, rather than 1/2 and 1/2. How can you predict this with certainty? Because you know that you will see some sequence $s$, and all sequences $s$ lead to the same FNC update.
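To make this concrete, here is a small sketch (my own, with illustrative values $\Omega = 10$ and $\omega = 50$, chosen small enough to enumerate every sequence) that computes the FNC posterior for each possible sequence $s$ and averages them: the expected future probability of the large universe is already close to $\omega/(\omega+1)$, not the current 1/2.

```python
# Sketch of the time inconsistency, assuming the same setup.
# Omega and omega_copies are illustrative values, small enough to enumerate
# every possible remembered sequence.
from itertools import product

Omega = 10
omega_copies = 50
p_seq = 2.0 ** -Omega

def fnc_posterior_large(sequence):
    """FNC posterior on the large universe after remembering `sequence`.

    The result depends only on the sequence's length, not on which bits it contains."""
    small = p_seq                                # the single copy sees this sequence
    large = 1 - (1 - p_seq) ** omega_copies      # some copy among omega sees it
    return large / (large + small)

posteriors = [fnc_posterior_large(s) for s in product((0, 1), repeat=Omega)]

current = 0.5                                         # FNC probability before any bits are seen
expected_future = sum(posteriors) / len(posteriors)   # all sequences are equiprobable

print(f"current FNC probability of the large universe: {current}")
print(f"expected future FNC probability:               {expected_future:.4f}")
print(f"omega/(omega+1):                               {omega_copies / (omega_copies + 1):.4f}")
```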
So FNC is time inconsistent.
Forgetful agents.
More strangely, FNC can be time inconsistent in the other direction, if your copies are forgetful. If they start forgetting large initial pieces of the sequence $s$, then all of their (and your) probabilities will start to move back towards the 1/2 and 1/2 probabilities.
For example, if your copies have forgotten all but the last bit of the sequence, then the probability of some copy seeing the same bit as you (whether that’s a 0 or a 1) in a large universe is $1-\left(1-\tfrac{1}{2}\right)^\omega = 1-\left(1-\tfrac{\omega/2}{\omega}\right)^\omega$. For large $\omega$, this is approximately $1-\exp(-\omega/2)\approx 1$, by the limit expression for the exponential.
Then the update ratio will be $1 : 1/2$, and the probabilities of the large and small universes will be close to 2/3 and 1/3.
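The slide back can be made explicit with one more sketch (again my own, with an illustrative $\omega = 1000$): if each copy remembers only $k$ bits, the FNC posterior on the large universe is $\big(1-(1-2^{-k})^\omega\big) \big/ \big(1-(1-2^{-k})^\omega + 2^{-k}\big)$, which runs from roughly $\omega/(\omega+1)$ for large $k$ down to 2/3 at $k=1$ and 1/2 at $k=0$.

```python
# Sketch of the forgetful-agents effect, assuming the same setup.
# omega_copies = 1000 is an illustrative value.
omega_copies = 1000

def posterior_large_given_k_bits(k, omega):
    """FNC posterior on the large universe when each copy remembers only k bits."""
    p_match = 2.0 ** -k                      # chance one copy's k bits match yours
    small = p_match                          # the single copy must match
    large = 1 - (1 - p_match) ** omega       # at least one of omega copies matches
    return large / (large + small)

for k in (20, 10, 5, 2, 1, 0):
    print(f"bits remembered = {k:2d}: "
          f"P(large universe) = {posterior_large_given_k_bits(k, omega_copies):.4f}")
```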