We can distinguish two cases—one case where there is some physical difference out there that you could find if you looked for it, and another case (e.g. trying to put probability distributions over what the universe looks like outside our lightcone) where you have different theories that really don’t have any empirical consequence.
In the first case, I don’t think it’s begging the question at all to say that you should have some probability distribution over those future empirical results, because the best part of probability distributions is how they capture what we expect about future empirical results. This should not stop working just because there might be someone next door who has the same memories as me. And absolutely this is about epistemics. We can phrase the Sleeping Beauty problem entirely in terms of ordinary empirical questions about the outside world—if you can give me a probability distribution over what time it is and just call it ordinary belief, Sleeping Beauty can give me a probability distribution over what day it is and just call it ordinary belief. You can use the same reasoning process.
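For concreteness, here is a quick sketch of what such a distribution could look like in the standard setup (one awakening on Monday after heads; awakenings on Monday and Tuesday after tails), counting what fraction of awakenings fall on each day. To be clear, this is just the awakening-counting ("thirder") bookkeeping, shown for illustration; whether that is the right bookkeeping is exactly what is in dispute.

```python
import random

# Monte Carlo of the standard Sleeping Beauty setup: heads -> one awakening
# (Monday); tails -> two awakenings (Monday and Tuesday). We tally what
# fraction of awakenings fall on each day and each coin outcome. This is
# the awakening-counting ("thirder") bookkeeping, used here purely as an
# illustration of "a probability distribution over what day it is".
def simulate(n_trials=100_000, seed=0):
    rng = random.Random(seed)
    awakenings = []  # (coin, day) for every awakening across all trials
    for _ in range(n_trials):
        coin = rng.choice(["heads", "tails"])
        awakenings.append((coin, "Monday"))
        if coin == "tails":
            awakenings.append((coin, "Tuesday"))
    total = len(awakenings)
    p_monday = sum(day == "Monday" for _, day in awakenings) / total
    p_heads = sum(coin == "heads" for coin, _ in awakenings) / total
    return p_monday, p_heads

p_monday, p_heads = simulate()
print(f"fraction of awakenings on Monday: {p_monday:.3f}")   # ~0.667
print(f"fraction of awakenings after heads: {p_heads:.3f}")  # ~0.333
```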
In the case where there is no empirical difference, then yes, I think it’s ultimately about Solomonoff induction, which is significantly more subjective (allowing a choice of “programming language” that can change what you think is likely, with no empirical evidence to ever change your mind). But again this isn’t about practical consequences. If we’re in a simulation (I’m somewhat doubtful of the ancestor simulation premise, myself), I don’t think the right answer is “somehow fool ourselves into thinking we’re not in a simulation so we can take good actions.” I’d rather correctly guess whether I’m in a simulation and then take good actions anyhow.
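To spell out the “programming language” point: the Solomonoff prior is defined relative to a choice of universal machine $U$, and the invariance theorem only pins it down up to a multiplicative constant. Roughly (a standard statement, nothing specific to this thread):

$$ M_U(x) \;=\; \sum_{p\,:\,U(p)\text{ outputs a string beginning with }x} 2^{-\ell(p)}, \qquad M_V(x) \;\ge\; c_{UV}\, M_U(x) \;\text{ for all } x. $$

For ordinary predictions the data eventually swamps the constant $c_{UV}$; for hypotheses that never make different predictions, it never does, which is exactly where the subjectivity bites.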
Sleeping Beauty can give me a probability distribution over what day it is and just call it ordinary belief
But the whole question is about how Beauty should decide on her probabilities before seeing any evidence, right? What I’m saying is that she should do that with reference to her intended goals (or just decide that probabilities aren’t useful in this context).
I’m taking a behaviorist/decision-theoretic view on probability here—I’m saying that we can define an agent’s probability distribution over worlds in terms of its decision function and utility function. An agent definitionally believes an event will occur with probability p if it will sacrifice a resource worth <p utilons to get a certificate paying out 1 utilon if the event comes to pass.
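A minimal sketch of that definition in code, assuming the agent is exposed as a black-box `accepts_bet(price)` predicate (my naming, not anything from the discussion): the implied probability is read off as the highest price the agent will pay for a certificate worth 1 utilon if the event occurs.

```python
# Behaviorist probability as a revealed betting price. `accepts_bet(price)`
# returns True if the agent will pay `price` utilons for a certificate that
# pays 1 utilon if the event occurs. The implied probability is the supremum
# of accepted prices, found here by binary search (assumes the agent's
# acceptance is monotone in price).
def implied_probability(accepts_bet, tol=1e-6):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if accepts_bet(mid):
            lo = mid  # still willing to pay this much; push higher
        else:
            hi = mid  # too expensive; back off
    return lo

# Example: a vanilla expected-utility agent that assigns the event
# probability 0.3 accepts exactly the bets priced below 0.3.
vanilla_agent = lambda price: 0.3 * 1.0 - price > 0
print(implied_probability(vanilla_agent))  # ~0.3
```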
I’d rather correctly guess whether I’m in a simulation and then take good actions anyhow.
But what does ‘correctly’ actually mean here? It can’t mean that we’ll eventually see clear signs of a simulation, as we’re specifically positing there are no observable differences. Does it mean ‘the Solomonoff prior puts most of the weight for our experiences inside a simulation’? But we would only say this means ‘correctly’ because Solomonoff induction seems like a good abstraction of our normal sense of reality. But ‘UDT, with a utility function weighted by the complexity of the world’ seems like just as good of an abstraction, so it’s not clear why we should prefer one or the other. (Note that the ‘effective probability’ derived from UDT is not the same as the complexity weighting.)
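One way to unpack that parenthetical, under an assumption I’m adding purely for illustration (the bet pays out to every instance of the agent in a world, and every instance runs the same policy): a UDT agent with complexity-weighted utility bets as if world $w$ had weight proportional to its complexity weight times its stake in that world, not the complexity weight alone.

$$ \Pr_{\text{effective}}(w) \;\propto\; 2^{-K(w)} \cdot n_w \;\ne\; \frac{2^{-K(w)}}{\sum_{w'} 2^{-K(w')}}, $$

where $n_w$ is the number of instances the bet pays out to in world $w$. So the betting-revealed (‘effective’) probabilities can come apart from the complexity weighting even though the same weights sit in the utility function.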
I actually think there is an interesting duality here—within this framework, agents as moral actors are supposed to use UDT, but as moral patients they are weighted by Solomonoff probabilities. I suspect there’s an alternative theory of rationality that can better integrate these two aspects, but for now I feel like UDT is the more useful of the two, at least for answering anthropic/decision problems.