Sleeping Beauty can give me a probability distribution over what day it is and just call it ordinary belief
But the whole question is about how Beauty should decide on her probabilities before seeing any evidence, right? What I’m saying is that she should do that with reference to her intended goals (or just decide that probabilities aren’t useful in this context).
I’m taking a behaviorist/decision-theoretic view on probability here: we can define an agent’s probability distribution over worlds in terms of its decision function and utility function. An agent definitionally believes an event will occur with probability p if it will sacrifice a resource worth less than p utilons to get a certificate that pays out 1 utilon if the event comes to pass.
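To make that definition concrete, here’s a minimal sketch (my own illustration, with made-up names, not anything from the original setup): treat the agent as a decision function over certificate prices, and read its probability off as the highest price it will pay.

```python
# Behaviorist probability, sketched: an agent is a function that says whether it
# will trade `price` utilons for a certificate paying 1 utilon if the event
# occurs. Its "probability" for the event is the supremum price it accepts.

def implied_probability(will_buy, tol=1e-6):
    """Recover the probability an agent's decision function implicitly assigns
    to an event, by binary search over certificate prices in [0, 1]."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if will_buy(mid):   # agent accepts: its threshold is at least `mid`
            lo = mid
        else:               # agent declines: its threshold is below `mid`
            hi = mid
    return (lo + hi) / 2

# Example: an agent that internally assigns the event probability 0.3 and buys
# exactly when the certificate's expected value exceeds its price.
agent = lambda price: 0.3 * 1.0 > price
print(implied_probability(agent))  # ~0.3
```

The point is just that nothing beyond the decision function is needed; "belief" is whatever betting behavior it induces.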
I’d rather correctly guess whether I’m in a simulation and then take good actions anyhow.
But what does ‘correctly’ actually mean here? It can’t mean that we’ll eventually see clear signs of a simulation, since we’re specifically positing that there are no observable differences. Does it mean ‘the Solomonoff prior puts most of the weight for our experiences inside a simulation’? But we would only say that counts as ‘correctly’ because Solomonoff induction seems like a good abstraction of our normal sense of reality, and ‘UDT with a utility function weighted by the complexity of the world’ seems like just as good an abstraction, so it’s not clear why we should prefer one over the other. (Note that the ‘effective probability’ derived from UDT is not the same as the complexity weighting.)
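To illustrate that parenthetical, here’s a toy calculation (my own sketch, not from the original discussion) using the standard Sleeping Beauty setup: both worlds get prior/complexity weight 1/2, but a total-utilitarian Beauty following UDT, offered a per-awakening certificate on heads, will only pay up to 1/3 for it.

```python
# Toy model (assumed, not from the original comment): fair coin, so each world
# gets weight 1/2; Beauty is a total utilitarian and is offered, at every
# awakening, a certificate paying 1 utilon iff the coin landed heads.

PRIOR = {"heads": 0.5, "tails": 0.5}      # weight on each world
AWAKENINGS = {"heads": 1, "tails": 2}     # times the offer is made per world
PAYOFF = {"heads": 1.0, "tails": 0.0}     # certificate value in each world

def policy_value(price):
    """UDT evaluates the policy 'always buy at this price' across worlds."""
    return sum(PRIOR[w] * AWAKENINGS[w] * (PAYOFF[w] - price) for w in PRIOR)

# The highest price at which buying is still (weakly) worthwhile is the
# effective probability: 1/3 here, not the 1/2 the prior assigns to heads.
prices = [p / 1000 for p in range(1001)]
effective_p = max(p for p in prices if policy_value(p) >= 0)
print(effective_p)  # ~0.333
```

So the betting-revealed ‘effective probability’ (1/3) and the weighting over worlds (1/2) come apart even though both are derived from the same setup.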
I actually think there is an interesting duality here: within this framework, agents as moral actors are supposed to use UDT, but as moral patients they are weighted by Solomonoff probabilities. I suspect there’s an alternative theory of rationality that would better integrate these two aspects, but for now UDT feels like the more useful of the two, at least for answering anthropic/decision problems.