I'm already thinking about it in those terms, so I'm not sure what's going wrong here.
Would it have been clearer if the focusing question were more like "what is the probability that, if you manage to find a pair of mirrors that you can use to check the model number on the back of your head, you'll see a model number corresponding to the heavier brain?"
There's nothing wrong with the probability request here; my problem is with the scenario. What kind of evidence are you getting that makes these two and only these two outcomes possible? Solomonoff/Bayes would never rule out any outcome, just assign some of them low probability.
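To make that concrete, here's a minimal sketch of ordinary Bayesian conditioning over a finite hypothesis set (the hypothesis names and numbers are made up for illustration, not taken from the scenario): a hypothesis only drops to zero posterior if its likelihood for the observed evidence is exactly zero; otherwise it just gets small.

```python
def bayes_update(prior, likelihood):
    """Return the posterior over hypotheses given one piece of evidence."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypothetical hypotheses about which brain you are, with a made-up prior.
prior = {"light_brain": 0.45, "heavy_brain": 0.45, "something_else": 0.10}

# Likelihood of the observed model number under each hypothesis.
# No likelihood is exactly 0, so nothing gets ruled out, only down-weighted.
likelihood = {"light_brain": 0.05, "heavy_brain": 0.90, "something_else": 0.20}

posterior = bayes_update(prior, likelihood)
print(posterior)
# Every hypothesis keeps a nonzero posterior. Claiming "these two and only
# these two outcomes are possible" would require zero likelihood for
# everything else, which the scenario never justifies.
```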
I've talked about the binding problem in Solomonoff induction before; see https://www.lesswrong.com/posts/Jqwb7vEqEFyC6sLLG/solomonoff-induction-and-sleeping-beauty and the posts it links back to. See also "dust theory".