I don’t actually have a rigorous answer at the moment, but let me go into what I think of as the “two-fluid model of anthropics.”
The two “fluids” are indexical probability measure and anthropic measure. Indexical probability is “how likely you are to be a particular person”—it is determined by what you know about the world. Anthropic measure is magical reality fluid—it’s “how much you exist.” Or if we project into the future, your probability measure is how likely you are to see a certain outcome. Anthropic measure is how much that outcome will exist.
Usually these two measures correspond. We see things and things exist at about the same rate. But sometimes they diverge, and then we need a two-fluid model.
A simple example of this is the quantum suicide argument. The one says “no matter what, I’ll always have some chance of surviving. From my perspective, then, I’ll never die—after all, once I die, I stop having a perspective. So, let’s play high-stakes Russian roulette!” There are multiple ways to frame this mistake, but the relevant one here is that it substitutes what is seen (your probability that you died) for what is real (whether you actually died).
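To make the divergence concrete, here is a toy calculation (the per-round survival probability and the number of rounds are numbers I'm making up purely for illustration, not anything from the argument above):

```python
# Toy illustration: n rounds of Russian roulette, each survived with probability p.
p = 1 / 6          # hypothetical per-round survival probability
n = 5              # hypothetical number of rounds

# "What is seen": conditional on still having a perspective, you have always
# survived, so the probability you assign to being dead, given that you are
# observing anything at all, is 0.
p_dead_given_observing = 0.0

# "What is real": the anthropic measure remaining in branches where you
# survive shrinks geometrically with each round.
surviving_measure = p ** n

print(p_dead_given_observing)   # 0.0
print(surviving_measure)        # ~0.00013
```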
Another case where they diverge is making copies. If I make some identical copies of you, your probability that you are some particular copy should go down as I increase the number of copies. But you don’t exist any less as I make more copies of you—making copies doesn’t change your anthropic measure.
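Again a toy sketch, just to show the shape of the divergence: your indexical probability of being any particular copy falls as 1/N, while the per-copy anthropic measure stays put. The function names here are mine, not standard terminology.

```python
def indexical_probability(n_copies: int) -> float:
    # credence spread uniformly over indistinguishable copies
    return 1.0 / n_copies

def per_copy_anthropic_measure(n_copies: int, base_measure: float = 1.0) -> float:
    # copying doesn't dilute how much each copy exists, in this picture
    return base_measure

for n in (1, 2, 10, 100):
    print(n, indexical_probability(n), per_copy_anthropic_measure(n))
```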
Two-fluid model decision-making algorithm: follow the strategy that maximizes the final expected utility (found using anthropic measure and causal structure of the problem) for whoever you think you are (found using indexical probability measure). This is basically UDT, slightly generalized.
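Here is a minimal sketch of that decision rule, with everything (identities, outcome measures, utilities) left abstract and made up for illustration; it is only meant to show where each fluid enters the computation, not to be a real implementation of UDT.

```python
def choose_strategy(strategies, identities, indexical_prob, outcome_measure, utility):
    """Pick the strategy with the highest expected utility, where the
    expectation mixes indexical probability over who you are with
    anthropic measure over what happens."""
    def score(strategy):
        total = 0.0
        for who in identities:
            # indexical_prob[who]: how likely you are to be this person
            # outcome_measure(who, strategy): dict mapping each outcome to
            # how much of it will exist if that person follows this strategy
            for outcome, measure in outcome_measure(who, strategy).items():
                total += indexical_prob[who] * measure * utility(who, outcome)
        return total
    return max(strategies, key=score)

# A trivial non-anthropic problem, where the rule reduces to ordinary
# expected utility maximization:
best = choose_strategy(
    strategies=["safe", "risky"],
    identities=["me"],
    indexical_prob={"me": 1.0},
    outcome_measure=lambda who, s: {"win": 0.5, "lose": 0.5} if s == "risky" else {"ok": 1.0},
    utility=lambda who, o: {"win": 10, "lose": -20, "ok": 1}[o],
)
print(best)  # "safe"
```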
But (and this is why I was wrong before) this problem is actually outside the scope of my usual two-fluid model. It doesn’t have a well-defined indexical probability to work with (specifically, which person you are, not just what situation you’re in), because that probability depends on your decision. We’ll need to figure out the correct generalization of TDT to handle this.
Okay, so you choose as if you’re controlling the output of the logical node that causes your decision. Non-anthropically you can just say “calculate the causal effect of the different logical-node-outputs, then output the one that causes the best outcome.” But our generalization needs to be able to answer the question “best outcome for whom?” I would love to post this comment with this problem resolved, but it’s tricky and so I’ll have to think about it more / get other people to tell me the answer.