It seems that my communication attempt failed badly last time, so let me try again. The “standard” approach to indexicals is to treat indexical uncertainty the same as any other kind of uncertainty. You compute a probability of being at each location, and then maximize expected utility. I tried to point out in this post that because decisions made at each location can interact non-linearly, this doesn’t work.
You transformed my example into a game theory example, and the paradox disappeared, because game theory does take into account interactions between different players. Notice that in your game theory example, the computation that arrives at the solution looks nothing like an expected utility maximization involving probabilities of being at different locations. The probability of being at a location doesn’t enter into the decision algorithm at all, so do such probabilities mean anything?
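To make the non-linear interaction concrete, here is a toy sketch in Python (the payoffs and the two-room setup are my own made-up illustration, not the example from the post): two identical copies necessarily run the same decision procedure, and the naive per-location expected-utility calculation both lets the indexical probability cancel out and hinges on a circular guess about what "the other" copy does.

    # Hypothetical setup: two identical copies sit in rooms 1 and 2, neither
    # knows which room it is in, and both run the same decision procedure, so
    # they necessarily pick the same action.  Total utility depends jointly
    # (non-additively) on both rooms' actions.  All numbers are made up.
    U = {("A", "A"): 10, ("A", "B"): 0,
         ("B", "A"): 0,  ("B", "B"): 6}
    ACTIONS = ("A", "B")

    # Joint-policy view: both copies act alike, so just compare U(a, a).
    # No probability of "being in room 1" appears anywhere.
    best_joint = max(ACTIONS, key=lambda a: U[(a, a)])
    print("joint-policy choice:", best_joint)        # -> "A" (10 beats 6)

    # Naive indexical-EU view: P(I'm in room 1) = 0.5, and score my action
    # while holding "the other room's" action fixed at some guess.
    p_room1 = 0.5
    guess_about_other = "B"
    def naive_eu(a):
        return p_room1 * U[(a, guess_about_other)] + (1 - p_room1) * U[(guess_about_other, a)]
    best_naive = max(ACTIONS, key=naive_eu)
    print("naive choice:", best_naive)               # -> "B": driven entirely by
    # the circular guess about the other copy; p_room1 cancels out because U
    # is symmetric, so the indexical probability never mattered.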
You compute a probability of being at each location, and then maximize expected utility. I tried to point out in this post that because decisions made at each location can interact non-linearly, this doesn’t work.
How does it not work?
If you are at a different location, that’s a different world state. You compute the utility for each world state separately. Problem solved.
And to the folks who keep voting me down when I point out basically the same solution: state why you disagree. You've already taken 3 karma from me. Don't just keep taking karma for the same thing over and over without explaining why.
If the same world contains two copies of you, you can be either copy within the same world.
The same world does not contain two copies of you. You are confused about the meaning of “you”.
Treat each of these two entities just the same way you treat every other agent in the world. If they are truly identical, it doesn’t matter which one is “you”.
Yes, they do. In this case you just got lucky and the probabilities factored out of the calculations. The general case where they don’t necessarily factor out is called evolutionary game theory: indexical probabilities correspond to replicator frequencies, utility corresponds to fitness.
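A minimal replicator-dynamics sketch of the claimed correspondence (the payoff matrix below is an arbitrary stand-in, not taken from anything above): fitness here is frequency-dependent, so the replicator frequencies, playing the role of the indexical probabilities, do not factor out of the computation in general.

    import numpy as np

    # Replicator dynamics: frequencies x_i play the role of the "indexical
    # probabilities", payoffs the role of utility/fitness.  A[i, j] is the
    # (made-up) payoff to type i when it meets type j.
    A = np.array([[2.0, 0.0],
                  [3.0, 1.0]])

    x = np.array([0.5, 0.5])               # initial frequencies of the two types
    dt = 0.01
    for _ in range(10_000):
        fitness = A @ x                    # frequency-dependent fitness
        average = x @ fitness              # population-average fitness
        x = x + dt * x * (fitness - average)   # replicator equation
        x = x / x.sum()                    # re-normalize after the Euler step

    print(x)   # type 2 strictly dominates type 1 here, so x -> [0, 1]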
I need to brush up on evolutionary game theory, but I don’t see the correspondence between these two subjects yet. Can you take a standard puzzle involving indexical uncertainty, for example the Sleeping Beauty Problem, and show how to solve it using evolutionary game theory?
Hmm, I don’t see any problem in that scenario. It doesn’t even require game theory because the different branches don’t interact. Whatever monetary rewards you assign to correct/incorrect answers, the problem will be easy to solve by simple expected utility maximization.
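A sketch under one assumed reward scheme (a dollar per correct guess about the coin, paid at every awakening): the maximization runs over whole strategies and over coin outcomes, and never needs a probability of “being at” a particular awakening.

    # Sleeping Beauty as a payoff-maximization problem, under an assumed
    # reward scheme: $1 per correct guess, once per awakening (one awakening
    # if Heads, two if Tails), and the same policy answers at every awakening.
    P_HEADS = 0.5
    REWARD = 1.0

    def expected_payoff(guess):
        payoff_if_heads = REWARD * (guess == "heads")        # one awakening
        payoff_if_tails = 2 * REWARD * (guess == "tails")    # two awakenings
        return P_HEADS * payoff_if_heads + (1 - P_HEADS) * payoff_if_tails

    for guess in ("heads", "tails"):
        print(guess, expected_payoff(guess))   # heads -> 0.5, tails -> 1.0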
Consider two players as two concurrent processes: each can make any of three decisions. If you consider their decisions separately, that's a total of 9 options, and the state space that you construct to analyze them will contain 9 elements. Reasoning with uncertainty can then consider events on this state space, and preference is free to define prior+utility for the 9 elements in any way.
But consider another way of treating this situation: instead of 9 elements in the state space, let’s introduce only 6: 3 for the first player’s decision and 3 for the second player’s. Now, the joint decision of our players is represented not by one element of the state space as in the first case, but by a pair of elements, one from each triple. The options for choosing prior+utility, and hence preference, are more limited for this state space.
In the first case, it's unclear what the probability of being one of the players could mean: each element of the state space corresponds to both players. In the second case, it's easy: just take the total measure of each triple.
When the decisions are dependent, the second way of treating this situation can fail, and the expressive power of expected utility becomes insufficient to express the resulting preference.
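Here is a small sketch of the contrast, reading the 6-element construction as "the value of a joint decision is the first element's contribution plus the second element's contribution" (that additive reading is my own gloss): the factored form provably cannot express an interaction utility that the 9-element form expresses trivially.

    import itertools

    DECISIONS = ("x", "y", "z")

    # 9-element state space: one element per joint decision (i, j), and the
    # utility over those 9 elements can be anything -- here a made-up utility
    # that rewards the two players for making matching choices.
    U_joint = {(i, j): (5.0 if i == j else 1.0)
               for i, j in itertools.product(DECISIONS, DECISIONS)}

    # 6-element state space: 3 elements for player 1, 3 for player 2.  A joint
    # decision is a pair of elements, and (under the additive reading) its
    # value can only be assembled from the two elements' separate contributions.
    u1 = {"x": 1.0, "y": 2.0, "z": 0.0}     # arbitrary per-element utilities
    u2 = {"x": 0.5, "y": 0.0, "z": 3.0}
    def U_factored(i, j):
        return u1[i] + u2[j]

    # Any utility of the form u1[i] + u2[j] satisfies
    #   U(x,x) + U(y,y) == U(x,y) + U(y,x),
    # so it can never reproduce the "matching" interaction above.
    print(U_factored("x", "x") + U_factored("y", "y"),
          U_factored("x", "y") + U_factored("y", "x"))     # 3.5  3.5
    print(U_joint[("x", "x")] + U_joint[("y", "y")],
          U_joint[("x", "y")] + U_joint[("y", "x")])       # 10.0  2.0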
There is an interesting extension to the question of whether indexical probability is always meaningful: is the probability of ordinary observations, even in a deterministic world, meaningful? I'm not sure it is. When you solve the decision problem, you consider preference over strategies, and a strategy includes instructions for what to do given each possible observation. In the space of all possible strategies, each point specifies behavior for all branches at each potential observation, just like in the example with triples of decisions above, where all 9 elements of the state space describe the decisions of both players. There doesn't seem to be a natural way to define the probability of each of the possible observations at a given observation point, starting from a distribution representing preference over possible strategies.
In the case of probability of ordinary observations, I think you can assign probabilities if your preferences over possible strategies satisfy some conditions, the major one being that what you prefer to happen in one branch has to be independent of what you prefer to happen in another branch, i.e., the Axiom of Independence. If we ignore counterfactual-mugging type considerations, do you see any problems with this? If so, can you give an example?
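A sketch of what that would look like, under the extra simplifying assumption that the within-branch utility function is the same in both branches: when the utility over whole strategies decomposes additively across branches, the branch weights can be read back off the table and normalized into something that behaves like observation probabilities.

    import itertools

    # Assumed setup: one observation point with two possible observations, and
    # a strategy is a pair (action in branch 1, action in branch 2).  Suppose
    # preference over strategies is summarized by V[(a1, a2)].  If V has the
    # form w1*u(a1) + w2*u(a2), the normalized weights act like probabilities.
    ACTIONS = ("safe", "risky")
    u = {"safe": 1.0, "risky": 3.0}          # hypothetical within-branch utility

    def decomposable_V(w1, w2):
        return {(a1, a2): w1 * u[a1] + w2 * u[a2]
                for a1, a2 in itertools.product(ACTIONS, ACTIONS)}

    V = decomposable_V(0.25, 0.75)

    # Read the implied probability of observation 1 back off the table: the
    # utility swing from changing the branch-1 action, relative to the total.
    swing1 = V[("risky", "safe")] - V[("safe", "safe")]
    swing2 = V[("safe", "risky")] - V[("safe", "safe")]
    print("implied P(observation 1) =", swing1 / (swing1 + swing2))   # -> 0.25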
This is exactly the difference that allows having the 6-element state space, as in the example with indexical uncertainty above, instead of the more general 9-element state space. You place the possibilities in one branch side by side with the possibilities in the other branch, instead of considering all possible combinations of possibilities. It's easy to represent the various situations to which you assign probability as alternatives lying side by side in the state space: the alternatives in different possible worlds, or counterfactuals, never "interact", so it seems right to model them simply as independent options. The same goes for two physical systems that don't interact with each other: what's the difference between that and being in different possible worlds? And a special case of this situation is indexical uncertainty. One condition for doing this without problems is independence. But independence isn't really true; it's an approximation.
It's trivial to set up situations equivalent to counterfactual mugging if the participants are computer programs that don't run very far. It's possible to prove things about where a program can go, and to perform actions depending on the conclusion. What do you do then? I don't know yet; your comment brought up the idea of the meaninglessness of the probability of ordinary observations only yesterday, and before that I didn't notice this issue. Maybe I'll finally find a situation where prior+utility isn't an adequate way of representing preference, or maybe there is a good way of lifting the probability of observations to the probability of strategies.