Why is your view not easily summarized? From what I see, the solution satisfying all of the requirements looks rather simple, without even any need to define causality and the like. I may write it up at some point in the coming months, once some lingering confusions (not crucial to the main point) are resolved.
Basically, all the local decisions come from the same computation that would be performed to set the most general precommitment for all possible states of the world. The expected utility maximization is defined only once, on the global state space, and the actual actions merely retrieve the relevant part of the global solution, given the encountered observations. The observations don’t change the state space over which the expected utility optimization is defined (and don’t change the optimal global solution or the preference order on global solutions), only what the decisions in a given (counterfactual) branch can affect. Since the global precommitment is the only thing that defines the local agents’ decisions, the “commitment” part can be dropped, and the agents’ actions can simply be defined to follow the resulting preference order.
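To make that concrete, here is a minimal sketch of the structure I mean when I say the actions only retrieve the global solution. The particular names (`worlds`, `obs_of`, `prob`, `utility`) and the brute-force enumeration of policies are purely illustrative assumptions, not a proposed implementation:

```python
from itertools import product

def solve_globally(observations, actions, worlds, prob, utility, obs_of):
    """Pick one global policy (a map observation -> action) by maximizing
    expected utility over all possible worlds, once and for all."""
    best_policy, best_eu = None, float("-inf")
    # Enumerate every possible precommitment: one action per observation.
    for assignment in product(actions, repeat=len(observations)):
        policy = dict(zip(observations, assignment))
        # Expected utility of following this policy across all worlds at once.
        eu = sum(prob(w) * utility(w, policy[obs_of(w)]) for w in worlds)
        if eu > best_eu:
            best_policy, best_eu = policy, eu
    return best_policy

def act(policy, observation):
    """A local decision just looks up the already-fixed global solution;
    the observation selects a branch, it does not redo the optimization."""
    return policy[observation]
```

The point of the sketch is that `act` contains no expected utility computation of its own: the observation only selects which part of the already-chosen global solution applies.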
I admit, it’d take some work to write that up understandably, but it doesn’t seem to involve difficult technical issues.
I think your summary is understandable enough, but I don’t agree that observations should never change the optimal global solution or preference order on the global solutions, because observations can tell you which observer you are in the world, and different observers can have different utility functions. See my counter-example in a separate comment at http://lesswrong.com/lw/90/newcombs_problem_standard_positions/5u4#comments.
From the global point of view, you only consider different possible experiences that imply different possible situations. Nothing changes, because everything is determined from the global viewpoint. If you want to determine certain decisions in response to certain possible observations, you also specify that globally, and set it in stone. Whatever happens to you, you can (mathematically speaking) consider it in advance, as an input sequence to your cognitive algorithm, and prepare a plan of action in response. The fact that you participate in a certain mind-copying experiment is also data to which you respond in a certain way.
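As a toy illustration (the observation strings and actions below are made up, not part of any actual scenario), the plan that is fixed in advance is just a mapping from complete input histories, identity-revealing observations included, to actions:

```python
# A hypothetical globally fixed plan: a lookup table from complete input
# histories to actions, prepared in advance for every sequence of observations
# the agent could receive, including observations that tell it which copy it is.
plan = {
    ("copied", "I am copy A"): "action_for_A",
    ("copied", "I am copy B"): "action_for_B",
    ("not copied",): "default_action",
}

def respond(history):
    # Whatever actually happens is just another input sequence; the response
    # was already settled when the plan was drawn up.
    return plan[tuple(history)]
```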
This is of course not for human beings; it is for something that holds much more strongly to reflective consistency. And in that setting, changing preferences is unacceptable.