Can you explain this equivalence?
When a problem involves a predictor that’s predicting your actions, it can often be transformed into another problem that has an indistinguishable copy of you inside the predictor. In some cases, like Counterfactual Mugging, the copy and the original can even receive different evidence, though they are still unable to tell which is which.
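To make the transformation concrete, here is a minimal sketch of my own (the function and variable names are mine, not from anything above): Newcomb's Problem with the predictor modeled as literally running a copy of the agent's decision procedure. The copy inside the predictor and the original facing the boxes are the same function, so neither can tell which instance it is.

```python
# Toy Newcomb's Problem: the predictor runs an exact copy of the agent's
# policy, so "being predicted" and "being the copy inside the predictor"
# are the same thing.

def newcomb_payoff(policy):
    prediction = policy()    # the indistinguishable copy, run inside the predictor
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    choice = policy()        # the "original" agent, choosing in front of the boxes
    small_box = 1_000 if choice == "two-box" else 0
    return opaque_box + small_box

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(newcomb_payoff(one_boxer))   # 1000000
print(newcomb_payoff(two_boxer))   # 1000
```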
There are more complicated scenarios, where the predictor is doing high-level logical reasoning about you instead of running a simulation of you. In simple cases like Newcomb’s Problem, that distinction doesn’t matter, but there is an important family of problems where it matters. The earliest known example is Gary Drescher’s Agent Simulates Predictor. Other examples are Wei Dai’s problem about bargaining and logical uncertainty and my own problem about logical priors. Right now this is the branch of decision theory that interests me most.
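As a rough illustration of the Agent Simulates Predictor structure (again my own toy construction, with a step budget standing in for the predictor's bounded proof search), here the predictor reasons about the agent under a tight resource limit, while the agent is strong enough to compute the predictor's output exactly before choosing, treats it as settled, and two-boxes itself out of the million:

```python
# Toy ASP setup: the predictor's "high-level reasoning" is crudely modeled
# as simulating the agent within a small step budget, defaulting to a
# "two-box" prediction if the budget runs out.

class OutOfBudget(Exception):
    pass

def run(thinker, budget):
    """Run thinker(tick), where each tick(cost) call spends part of the budget."""
    state = {"left": budget}
    def tick(cost=1):
        state["left"] -= cost
        if state["left"] < 0:
            raise OutOfBudget
    return thinker(tick)

def predictor(agent, budget=10):
    # Bounded reasoning about the agent: simulate it within `budget` steps,
    # and predict "two-box" if the reasoning can't be finished.
    try:
        return run(agent, budget)
    except OutOfBudget:
        return "two-box"

def causal_agent(tick):
    tick(100)                              # deliberating this hard exceeds the predictor's budget
    prediction = predictor(causal_agent)   # but the agent can simulate the predictor in full
    # Treating the prediction, and hence the box contents, as already fixed,
    # this agent takes the dominant action regardless of what it just computed.
    return "two-box"

prediction = predictor(causal_agent)
choice = run(causal_agent, budget=10_000)
payoff = (1_000_000 if prediction == "one-box" else 0) + (1_000 if choice == "two-box" else 0)
print(prediction, choice, payoff)   # two-box two-box 1000
```

The predictor can't finish reasoning about this agent, so it predicts two-boxing and leaves the big box empty, and the agent walks away with $1,000 where a one-boxer would have gotten $1,000,000.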