I deleted my previous reply since it seems unnecessary given your ETA.
I’m pretty sure there are cases more complicated than this perfectly amnesiac driver where that would be the only correct policy. (ETA: To be more specific, cases where the planning-optimal solution is not a sequential equilibrium.)
What would be the only correct policy? What I wrote after “According to my reading of that paragraph”? If so, I don’t understand your “cases where the planning-optimal solution is not a sequential equilibrium”. Please explain.
What would be the only correct policy? What I wrote after “According to my reading of that paragraph”?
Yes.
If so, I don’t understand your “cases where the planning-optimal solution is not a sequential equilibrium”. Please explain.
I would have thought it would be self-explanatory.
It looks like I will need to construct and analyze examples slightly more complicated than the Absent-Minded Driver. That may take a while. Questions before I start: Does UDT encompass game theory, or is it limited to analyzing single-player situations? Is UDT completely explained in your postings, or is it, like TDT, still in the process of being written up?
Questions before I start: Does UDT encompass game theory, or is it limited to analyzing single-player situations? Is UDT completely explained in your postings, or is it, like TDT, still in the process of being written up?
Wei has described a couple of versions of UDT. His descriptions seemed to me to be mathematically rigorous. Based on Wei’s posts, I wrote this pdf, which gives just the definition of a UDT agent (as I understand it), without motivation or justification.
The difficulty with multiple agents looks like it will be very hard to get around within the UDT framework. UDT works essentially by passing the buck to an agent who is at the planning stage*. That planning-stage agent then performs a conventional expected-utility calculation.
But some scenarios seem best described by saying that there are multiple planning-stage agents. That means that UDT is subject to all of the usual difficulties that arise when you try to use expected utility alone in multiplayer games (e.g., the Prisoner’s Dilemma). It’s just that these difficulties arise at the planning stage instead of at the action stage directly.
*Somewhat more accurately, the buck is passed to the UDT agent’s simulation of an agent who is at the planning stage.
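To make the buck-passing concrete, here is a minimal sketch of the control flow. This is only my toy rendering, not Wei’s formal definition: the world model is a placeholder, the prior over world-programs is elided, and only deterministic policies are searched (the real analysis also ranges over randomized policies).

```python
import itertools

# Toy rendering of "pass the buck to the planning stage": the agent does
# not update on where it is; it asks which *policy* (a map from
# observations to actions) a planning-stage agent would pick, then acts
# accordingly. World model and payoffs below are placeholders.

OBSERVATIONS = ["X"]           # the absent-minded driver sees the same thing everywhere
ACTIONS = ["exit", "continue"]

def payoff(policy):
    """Expected utility of running `policy` in a toy absent-minded-driver
    world: exiting at the first intersection pays 0, never exiting pays 1."""
    if policy["X"] == "exit":
        return 0
    return 1  # continues through both intersections

def udt_act(observation):
    # Planning-stage calculation: evaluate every deterministic policy
    # and keep the one with the highest expected utility.
    policies = [dict(zip(OBSERVATIONS, acts))
                for acts in itertools.product(ACTIONS, repeat=len(OBSERVATIONS))]
    best = max(policies, key=payoff)
    # Action stage: just do what the chosen policy says for this observation.
    return best[observation]

print(udt_act("X"))  # -> "continue"
```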
What I meant was, what point were you trying to make with that statement? According to Aumann’s paper, every planning-optimal solution is also an action-optimal solution, so the decision procedure they endorse will end up picking the planning-optimal solution. (My complaint is just that it goes about it in an unnecessarily roundabout way.) If theirs is a correct policy, then the policy of just recomputing the planning-optimal solution must also be correct. That seems to disprove your “only correct policy” claim. I thought your “sequential equilibrium” line was trying to preempt this argument, but I can’t see how.
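For concreteness: assuming the standard payoffs from Piccione and Rubinstein’s version of the problem (0 for exiting at the first intersection, 4 for exiting at the second, 1 for continuing through both; your diagram may use different numbers), both computations can be checked directly:

```python
# Absent-minded driver with the (assumed) standard payoffs:
# exit at 1st intersection -> 0, exit at 2nd -> 4, continue past both -> 1.

def planning_eu(p):
    """Expected utility, evaluated at the planning stage, of a policy
    that continues with probability p at every intersection."""
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

grid = [i / 10000 for i in range(10001)]
p_star = max(grid, key=planning_eu)    # analytically: 4 - 6p = 0, so p = 2/3
print(p_star, planning_eu(p_star))     # ~0.6667, ~1.3333

# Aumann-Hart-Perry action-optimality check: with consistent beliefs
# alpha = 1/(1 + p*) of being at the first intersection, switching to q
# now (while the other intersection still plays p*) should not help.
alpha = 1 / (1 + p_star)

def action_eu(q):
    at_first = (1 - q) * 0 + q * (1 - p_star) * 4 + q * p_star * 1
    at_second = (1 - q) * 4 + q * 1
    return alpha * at_first + (1 - alpha) * at_second

spread = max(action_eu(q) for q in grid) - min(action_eu(q) for q in grid)
print(spread)  # ~0 (up to grid error): the driver is indifferent over q,
               # so the planning-optimal p* is also action-optimal
```

So in this scenario the two analyses coincide, which is consistent with Aumann’s claim for the simple case.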
Does UDT encompass game theory, or is it limited to analyzing single-player situations?
Pretty much single-player for now. A number of people are trying to extend the ideas to multi-player situations, but it looks really hard.
Is UDT completely explained in your postings, or is it, like TDT, still in the process of being written up?
No, it’s not being written up further. (Nesov is writing up some of his ideas, which are meant to be an advance over UDT.)
What I meant was, what point were you trying to make with that statement? According to Aumann’s paper, every planning-optimal solution is also an action-optimal solution, so the decision procedure they endorse will end up picking the planning-optimal solution.
My understanding of their paper has changed somewhat since we began this discussion. I now believe that repeating the planning-optimal analysis at every decision node is only guaranteed to give ideal results in simple cases like this one, in which every decision point is in the same information set. In more complicated cases, I can imagine that the policy of planning-optimal-for-the-first-move, then action-optimal-thereafter might do better. I would need to construct an example to assert this with confidence.
(My complaint is just that it goes about it in an unnecessarily round-about way.) If theirs is a correct policy, then the policy of just recomputing the planning-optimal solution must also be correct.
In this simple example, yes. Perhaps not in more complicated cases.
That seems to disprove your “only correct policy” claim. I thought your “sequential equilibrium” line was trying to preempt this argument, but I can’t see how.
And I can’t see how to explain it without an example.
While I wait, did you see anything in Aumann’s paper that hints at “the policy of planning-optimal-for-the-first-move, then action-optimal-thereafter might do better”? Or is that your original research (to use Wikipedia-speak)? It occurs to me that if you’re correct about that, the authors of the paper should have realized it themselves and mentioned it somewhere, since it greatly strengthens their position.
Answering that is a bit tricky. If I am wrong, it is certainly “original research”. But my belief is based upon readings in game theory (including stuff by Aumann) which are not explicitly contained in that paper.
Please bear with me. I have a multi-player example in mind, but I hope to be able to find a single-player one which makes the reasoning clearer.
Regarding your last sentence, I must point out that the whole reason we are having this discussion is my claim to the effect that you don’t really understand their position, and hence cannot judge what does or does not strengthen it.
Ok, I now have at least a sketch of an example. I haven’t worked it out in detail, so I may be wrong, but here is what I think. In any scenario in which you gain and act on information after the planning stage, you should not use a recalculated planning-stage solution for any decisions after you have acted upon that information. Instead, you need to do the action-optimal analysis.
For example, let us complicate the absent-minded driver scenario that you diagrammed by adding an information-receipt and decision node prior to those two identical intersections. The driver comes in from the west and arrives at a T intersection where he can turn left (north) or right (south). At the intersection is a billboard advertising today’s lunch menu at Casa de Maria, his favorite restaurant. If the billboard promotes chile, he will want to turn right so as to have a good chance of reaching Maria’s for lunch. But if the billboard promotes enchiladas, which he dislikes, he probably wants to turn the other way and try for Marcello’s Pizza. Whether he turns right or left at the billboard, he will face two consecutive identical intersections (four identical intersections total). The day is cloudy, so he cannot tell whether he is traveling north or south.
Working this example in detail will take some work. Let me know if you think the work is necessary.
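If it would help, here is the scaffolding I would use to work it. The structure (billboard, then two identical intersections on each branch) is as described above, but every payoff number is a placeholder I have invented just to make the computation runnable:

```python
# Scaffolding for the extended scenario. Only the structure comes from
# the description; all payoff numbers below are invented placeholders.

PAYOFFS = {
    # (branch, outcome) -> utility
    ("south", "exit_1st"): 0,   # wrong exit
    ("south", "exit_2nd"): 4,   # reaches Casa de Maria
    ("south", "through"):  1,   # misses both exits
    ("north", "exit_1st"): 0,
    ("north", "exit_2nd"): 3,   # reaches Marcello's Pizza
    ("north", "through"):  1,
}

def branch_eu(branch, p):
    """Expected utility of continuing with probability p at each of the
    two identical intersections on the given branch."""
    return ((1 - p) * PAYOFFS[(branch, "exit_1st")]
            + p * (1 - p) * PAYOFFS[(branch, "exit_2nd")]
            + p * p * PAYOFFS[(branch, "through")])

grid = [i / 1000 for i in range(1001)]

# Planning stage: for each billboard message, the driver picks a turn
# and a continuation probability for the branch he takes.
for sign, turn in (("chile", "south"), ("enchiladas", "north")):
    p = max(grid, key=lambda q: branch_eu(turn, q))
    print(sign, turn, p, branch_eu(turn, p))
```

The interesting question, which this scaffolding does not yet settle, is what a recomputed planning-stage analysis would recommend at the intersections after the billboard information has already been acted on.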
Ok, I see. I’ll await your example.