Solomonoff Induction and Sleeping Beauty
Various people have said that Solomonoff Induction (SI) accords with the Self-Sampling Assumption (SSA) more than the Self-Indication Assumption (SIA). See these posts and the comments on them:
https://www.lesswrong.com/posts/omqnrTRnHs3pSYef2/down-with-solomonoff-induction-up-with-the-presumptuous
https://www.lesswrong.com/posts/sEij9C9MnzEs8kaBc/the-presumptuous-philosopher-self-locating-information-and
I was surprised, because I like both SI and SIA. Both seem correct to me, so I carefully considered the apparent contradiction. I believe that I have dissolved the contradiction, and that SI, properly applied, actually implies SIA. I can’t prove this broad claim, but I will at least argue that SI is a thirder in Sleeping Beauty, and gesture in the direction of what I think is wrong with the claims in the linked posts. As a bonus, if you read to the end I’ll throw in an intuition-generator for why SIA actually gives the correct answer in the Presumptuous Philosopher.
First, let me reconstruct the contradiction in the Sleeping Beauty context, and explain why it might seem that SI is a halfer.
Naive view:
There are three possible outcomes: Monday-Tails (MT), Monday-Heads (MH) and Tuesday-Heads (TH). Each of these three outcomes is equally simple, so the machines encoding each get equal weight and the probabilities are all 1/3.
Antithesis:
MT is actually simpler than MH. Why? Because if you know that it was heads, you still need to be told that it’s Monday, but if you know that it’s tails, then you already know that it’s Monday. MT is one bit simpler than MH and is therefore twice as likely under SI. SI is a halfer. Note that this is roughly the same argument as in the Presumptuous Philosopher post: it takes more information to encode “where you are” if there are many copies of you.
Synthesis:
Wait a minute. By equivalent logic, TH is simpler than MH: if you know that it’s Tuesday, you automatically know that it was heads! TH is then as simple as MT, but MH still seems more complicated, since to “locate” it you need two bits of info.
The core insight needed to solve this puzzle is that there are two different ways to encode MH: either “it’s Monday and also heads”, or “it’s heads and also Monday”. Each of those encodings is one bit more complicated than the other options, but there are twice as many of them. Since SI sums the weights of every program producing a given outcome, the two effects cancel, and in the end MH = TH = MT = 1/3.
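The counting argument above can be made concrete with a toy weighting. The bit costs here are illustrative assumptions rather than the lengths of actual Turing machine programs: suppose the coin outcome or the day each costs one bit, and an encoding of length L gets the usual Solomonoff-style weight 2^-L.

```python
# Toy Solomonoff-style weighting for Sleeping Beauty.
# Assumption: "tails" or "Tuesday" alone pins down the outcome (1 bit each),
# while MH needs both the coin and the day (2 bits), in either order.

def weight(bits):
    """Weight of a single encoding of the given length, 2**-bits."""
    return 2 ** -bits

w_MT = weight(1)                  # "tails" implies Monday
w_TH = weight(1)                  # "Tuesday" implies heads
w_MH = weight(2) + weight(2)      # two distinct 2-bit encodings, summed

total = w_MT + w_TH + w_MH
probs = {"MT": w_MT / total, "TH": w_TH / total, "MH": w_MH / total}
print(probs)  # each outcome comes out to 1/3
```

The extra bit halves each MH encoding's weight, but having two encodings doubles it back, which is exactly the cancellation described above.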
I strongly suspect the same thing ends up happening in the full Presumptuous Philosopher scenario, but it’s difficult to show rigorously. One can easily reason that if there are 100 observers, there are multiple ways to encode each: “the 2nd observer”, “the one after the 1st observer”, and “the one before the 3rd observer” all point to the same person. But it’s much more difficult to estimate how it all adds up. I’m fairly confident, based on the above argument, that it all adds up to the thirder position in Sleeping Beauty. I think that in the Presumptuous Philosopher it adds up such that you get full SIA, with no discount for the complexity of specifying individual observers. But I can’t prove that.
Presumptuous Philosopher intuition-generator
You bump into Omega, who’s sitting in front of a big red button that he clearly just pushed. He tells you that up until 60 seconds ago, when he pushed the button, there were a trillion trillion trillion observers in the universe. The button, when pushed, flips an internal fair coin. If heads, it Thanos-style kills everyone in the universe at random except for one trillion people. If tails, it does nothing. Either way, everyone who survives has this conversation with Omega. What are the odds that the coin was heads?
I think it’s quite plausible to say that it’s overwhelmingly unlikely to have been heads, given the fact that you survived. This scenario is identical in all relevant respects to the Presumptuous Philosopher.
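Treating your own survival as ordinary evidence, the update is just Bayes’ rule. A minimal sketch, using the numbers from the scenario (10^36 observers before the button, 10^12 survivors on heads):

```python
from fractions import Fraction

# Bayes' rule for the Omega button scenario.
prior_heads = Fraction(1, 2)                       # fair coin
p_survive_given_heads = Fraction(10**12, 10**36)   # chance you were spared
p_survive_given_tails = Fraction(1)                # tails kills no one

posterior_heads = (prior_heads * p_survive_given_heads) / (
    prior_heads * p_survive_given_heads + prior_heads * p_survive_given_tails
)
print(posterior_heads)  # 1/(10**24 + 1): overwhelmingly likely tails
```

Conditioning on the observation “I survived” is exactly the move that makes the heads world a trillion-trillion times less likely, which is the SIA-style answer.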