As I mentioned elsewhere, I don’t really understand...
>I think (1) is a poor formalization, because the game tree becomes unreasonably huge
What game tree? Why represent these decision problems as trees, or as game trees in particular, at all? At least some problems of this type can be represented efficiently, using various methods for representing functions on the unit simplex (including decision trees)… Also: is this decision-theoretically relevant? That is, are you saying that a good decision theory doesn’t have to deal with (1) because it is cumbersome to write out (some) problems of this type? But *why* would that be decision-theoretically relevant?
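For what it’s worth, here is a minimal sketch of what I mean by a compact representation (the payoffs and the example predictor policy are the usual illustrative ones, not taken from the post): the whole problem is just a function from the unit simplex over {one-box, two-box}, i.e. from p = P(two-box), to an expected payoff.

```python
# Minimal sketch: a Newcomb-like problem written as a single function on the
# unit simplex over {one-box, two-box}, i.e. just p = P(two-box) in [0, 1].
# Payoffs are the usual illustrative numbers, not taken from the post.

SMALL, BIG = 1_000, 1_000_000  # transparent box, opaque box

def fill_probability(p: float) -> float:
    """Predictor's policy as a function of the player's mixed strategy.
    Example policy (not the one quoted above): fill the opaque box
    iff the player one-boxes with probability at least 1/2."""
    return 1.0 if p <= 0.5 else 0.0

def expected_payoff(p: float) -> float:
    """Player's expected payoff when two-boxing with probability p."""
    return fill_probability(p) * BIG + p * SMALL

# The whole problem collapses to one curve over [0, 1]; no explicit game tree.
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p, expected_payoff(p))
```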
>some strategies of the predictor (like “fill the box unless the probability of two-boxing is exactly 1”) leave no optimal strategy for the player.
Well, there are less radical ways of addressing this. E.g., expected utility-type theories just assign a preference order to the set of available actions. We could be content with that and accept that in some cases, there is no optimal action. As long as our decision theory ranks the available options in the right order… Or we could restrict attention to problems where an optimal strategy exists despite this dependence.
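To make the “no optimal action” point concrete: under the quoted policy (“fill the box unless the probability of two-boxing is exactly 1”), expected payoff increases on [0, 1) and then drops at p = 1, so the supremum is approached but never attained, while the ranking of any two strategies is still perfectly well defined. A rough sketch (again with the usual illustrative payoffs, which aren’t from the post):

```python
# Sketch of why no optimal mixed strategy exists under the quoted predictor
# policy; payoffs are the usual illustrative Newcomb numbers, not the post's.

SMALL, BIG = 1_000, 1_000_000  # transparent box, opaque box

def expected_payoff(p: float) -> float:
    """Predictor fills the opaque box unless P(two-box) is exactly 1."""
    filled = 0.0 if p == 1.0 else 1.0
    return filled * BIG + p * SMALL

# Payoff rises toward 1,001,000 as p -> 1, then drops to 1,000 at p = 1:
# the supremum is never attained, but every pair of strategies is still ranked.
for p in (0.9, 0.99, 0.999, 1.0):
    print(p, expected_payoff(p))
```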
>And (3) seems like a poor formalization because it makes the predictor work too hard. Now it must predict all possible sources of randomness you might use, not just your internal decision-making.
For this reason, I always assume that predictors in my Newcomb-like problems are compensated appropriately and don’t work on weekends! Seriously, though: what does “too hard” mean here? Is this just the point that it is in practice easy to construct agents that cannot be realistically predicted in this way when they don’t want to be predicted? If so: I find that at least somewhat convincing, though I’d still be interested in developing theory that doesn’t hinge on this ability.
I guess I just like game theory. “Alice chooses a box and Bob predicts her action” can be viewed as a game with Alice and Bob as players, or with only Alice as a player and Bob baked into the shape of the game tree, but in any case it seems that option (2) from the post leads to games where solutions/equilibria always exist, while (1) doesn’t. Also see my other comment about amnesia; it’s basically the same argument. It’s fine if it’s not a strong argument for you.