I don’t understand how this helps. It doesn’t seem to allow anything I couldn’t do before. Is it just that you find it easier to justify substituting the enemy’s decision for your own than substituting the decision you would have precommitted to for your current one?
Yes, basically. This is “secretly” just a different way of looking at UDT, and this particular way is easy to get to from a standard game-theoretic starting point, but harder to get to from a “rationality is what wins” starting point.
Given that the non-anthropic problem is interesting because it introduces tension between these two viewpoints (sorta), this trick is interesting because it reduces that tension.
Given this framing I like it!
Yay!
Manfred could answer better, but I think this trick is designed to help with the point of view you take on the problem.
The problem with anthropic problems is that you aren’t sure which you is you. There are all sorts of branches that occur, and you don’t know which branch you’re on. You’re trying your damnedest to look backwards up the branching probability tree and hoping you don’t lose track of any branches.
By pretending you’re the researcher, you’re looking at possible branching futures the other way. You always have a frame of reference that doesn’t change subjectively, and doesn’t need updates. At least, that’s how I think it’s supposed to work.
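Here’s a minimal sketch of what scoring a policy from that fixed, forward-looking frame might look like. Everything in it is made up for illustration — the branch probabilities, the payoffs, and the expected_payoff helper are placeholders, not taken from any particular problem discussed here. The only point is that the policy gets evaluated once, over the whole branching tree, with no copy inside a branch ever updating on “which me am I?”.

```python
# Illustrative sketch: evaluate a fixed policy from the researcher's
# (pre-experiment) point of view by summing payoffs over forward branches,
# weighted by their prior probabilities. No per-copy anthropic update anywhere.

# Hypothetical branches: (prior probability, payoff under policy A, payoff under policy B)
branches = [
    (0.5, 1000, 700),   # e.g. the coin lands heads
    (0.5, 100, 700),    # e.g. the coin lands tails
]

def expected_payoff(policy_index):
    """Expected payoff of a policy, computed once from the fixed outside frame."""
    return sum(p * payoffs[policy_index] for p, *payoffs in branches)

print("policy A:", expected_payoff(0))  # 0.5*1000 + 0.5*100 = 550
print("policy B:", expected_payoff(1))  # 0.5*700  + 0.5*700 = 700
```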
The helpfulness described here is this: the mathematics is simpler. [Xachariah’s response explains why.]
Explanations for decision trees can also be simpler. For example, Newcomblike problems become almost trivial to consider from Omega’s perspective, even in the counterfactual mugging case.
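To make that concrete, here is a hedged sketch of counterfactual mugging scored from Omega’s pre-coin-flip perspective. The $100 / $10,000 stakes are the commonly quoted ones, and the expected_value helper is just an illustration of the arithmetic, not anyone’s actual formalism; from before the flip, the policy of paying wins in expectation.

```python
# Counterfactual mugging, scored from Omega's pre-flip perspective.
# Assumed (standard) stakes: on tails you are asked for $100; on heads Omega
# pays $10,000 only if it predicts you would have paid on tails.

P_HEADS = 0.5

def expected_value(pays_when_asked: bool) -> float:
    """Expected payoff of a fixed policy, evaluated before the coin is flipped."""
    heads_payoff = 10_000 if pays_when_asked else 0
    tails_payoff = -100 if pays_when_asked else 0
    return P_HEADS * heads_payoff + (1 - P_HEADS) * tails_payoff

print("pay:    ", expected_value(True))   # 0.5*10000 + 0.5*(-100) = 4950
print("refuse: ", expected_value(False))  # 0
```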
I can do all the same mathematics without creating an imaginary enemy. The only thing that is changing here is how I choose to describe the mathematics in question to myself. This evidently allows Manfred to feel comfortable doing specific mathematics that he would not be comfortable doing without describing it in terms of a contrived enemy’s perspective.