Ok, I’ve read the paper (most of it) and Wei-Dai’s article now. Two points.
In a sense, I understand how you might think that the Absent-Minded Driver is no less contrived and unrealistic than Newcomb’s Paradox. Maybe different people have different intuitions about which toy examples are informative and which are misleading. Someone else (on this thread?) responded to me recently with the example of frictionless pulleys and the like from physics. All I can tell you is that my intuition tells me that the AMD, the PD, frictionless pulleys, and even Parfit’s Hitchhiker all strike me as admirable teaching tools, whereas Newcomb problems and the old question of the irresistible force vs. the immovable object in physics are simply wrong problems, which can only create confusion.
Reading Wei-Dai’s snarking about how the LW approach to decision theory (with zero published papers to date) is so superior to the confusion in which mere misguided Nobel laureates struggle—well, I almost threw up. It is extremely doubtful that I will continue posting here for long.
It wasn’t meant to be a snark. I was genuinely trying to figure out how the “LW approach” might be superior, because otherwise the most likely explanation is that we’re all deluded in thinking that we’re making progress. I’d be happy to take any suggestions on how I could have reworded my post so that it sounded less like a snark.
Wei-Dai wrote a post entitled The Absent-Minded Driver, which I labeled “snarky”. Moreover, I suggested that the snarkiness was bad enough to be nauseating, bad enough to drive reasonable people to flee in horror from LW and SIAI. I here attempt to defend these rather startling opinions. Here is what Wei-Dai wrote that offended me:
This post examines an attempt by professional decision theorists to treat an example of time inconsistency, and asks why they failed to reach the solution (i.e., TDT/UDT) that this community has more or less converged upon. (Another aim is to introduce this example, which some of us may not be familiar with.) Before I begin, I should note that I don’t think “people are crazy, the world is mad” (as Eliezer puts it) is a good explanation. Maybe people are crazy, but unless we can understand how and why people are crazy (or to put it more diplomatically, “make mistakes”), how can we know that we’re not being crazy in the same way or making the same kind of mistakes?
The paper that Wei-Dai reviews is “The Absent-Minded Driver” by Robert J. Aumann, Sergiu Hart, and Motty Perry. Wei-Dai points out, rather condescendingly:
(Notice that the authors of this paper worked for a place called Center for the Study of Rationality, and one of them won a Nobel Prize in Economics for his work on game theory. I really don’t think we want to call these people “crazy”.)
Wei-Dai then proceeds to give a competent description of the problem and of the standard “planning-optimality” solution. Next comes a description of an alternative seductive-but-wrong solution by Piccione and Rubinstein. I should point out that everyone—P&R; Aumann, Hart, and Perry; Wei-Dai; me; and hopefully you who look into this—realizes that the alternative P&R solution is wrong. It gets the wrong result. It doesn’t win. The only problem lies in explaining exactly where the analysis leading to that solution went astray, and in explaining how it might be modified so as to go right. Making this analysis was, as I see it, the whole point of both papers—P&R’s and Aumann et al.’s. Wei-Dai describes some characteristics of Aumann et al.’s corrected version of the alternate solution. Then he (?) goes horribly astray:
In problems like this one, UDT is essentially equivalent to planning-optimality. So why did the authors propose and argue for action-optimality despite its downsides …, instead of the alternative solution of simply remembering or recomputing the planning-optimal decision at each intersection and carrying it out?
But, as anyone who reads the paper carefully should see, they weren’t arguing for action-optimality as the solution. They never abandoned planning-optimality. Their point is that if you insist on reasoning in this way (and Selten’s notion of “subgame perfection” suggests some reasons why you might!), then the algorithm they call “action-optimality” is the way to go about it.
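For readers who haven’t worked the example: the planning-optimal solution is easy to compute directly. The payoffs below are assumed from the standard Piccione–Rubinstein setup (exit at the first intersection X pays 0, exit at the second intersection Y pays 4, continuing past both pays 1), since the thread doesn’t restate them; a minimal sketch:

```python
# Absent-minded driver, planning stage. Payoffs assumed from the standard
# Piccione-Rubinstein setup: exit at X -> 0, exit at Y -> 4, continue -> 1.
def expected_payoff(p):
    """Ex-ante expected payoff if the driver continues with probability p
    at every intersection (he cannot tell them apart)."""
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1  # = 4p - 3p^2

# Grid search; calculus gives the exact optimum p* = 2/3, value 4/3.
grid = [i / 600 for i in range(601)]  # contains 2/3 exactly (i = 400)
p_star = max(grid, key=expected_payoff)
print(p_star, expected_payoff(p_star))  # ~0.6667, ~1.3333
```

This is the “planning-optimality” that both sides of the dispute agree gives the winning answer; the argument is only over how to reason once you are already at an intersection.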
But Wei-Dai doesn’t get this. Instead we get this analysis of how these brilliant people just haven’t had the educational advantages that LW folks have:
Well, the authors don’t say (they never bothered to argue against it), but I’m going to venture some guesses:
That solution is too simple and obvious, and you can’t publish a paper arguing for it.
It disregards “the probability of being at X”, which intuitively ought to play a role.
The authors were trying to figure out what is rational for human beings, and that solution seems too alien for us to accept and/or put into practice.
The authors were not thinking in terms of an AI, which can modify itself to use whatever decision theory it wants to.
Aumann is known for his work in game theory. The action-optimality solution looks particularly game-theory like, and perhaps appeared more natural than it really is because of his specialized knowledge base.
The authors were trying to solve one particular case of time inconsistency. They didn’t have all known instances of time/dynamic/reflective inconsistencies/paradoxes/puzzles laid out in front of them, to be solved in one fell swoop.
Taken together, these guesses perhaps suffice to explain the behavior of these professional rationalists, without needing to hypothesize that they are “crazy”. Indeed, many of us are probably still not fully convinced by UDT for one or more of the above reasons.
Let me just point out that the reason it is true that “they never argued against it” is that they had already argued for it. Check out the implications of their footnote #4!
Ok, those are the facts, as I see them. Was Wei-Dai snarky? I suppose it depends on how you define snarkiness. Taboo “snarky”. I think that he was overbearingly condescending without the slightest real reason for thinking himself superior. “Snarky” may not be the best one-word encapsulation of that attitude, but it is the one I chose. I am unapologetic. Wei-Dai somehow came to believe himself better able to see the truth than a Nobel laureate in the Nobel laureate’s field. It is a mistake he would not have made had he simply read a textbook or taken a one-semester course in the field. But I’m coming to see it as a mistake made frequently by SIAI insiders.
Let me point out that the problem of forgetful agents may seem artificial, but it is actually extremely important. An agent with perfect recall playing the iterated PD, knowing that it is to be repeated exactly 100 times, should rationally choose to defect. On the other hand, if he cannot remember how many iterations remain to be played, and knows that the other player cannot remember either, he should cooperate by playing Tit-for-Tat or something similar.
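The first half of that claim is the classic backward-induction unraveling, and it can be sketched in a few lines. The payoff numbers (T=5, R=3, P=1, S=0) are illustrative, not from the thread; the point is only the ordering T > R > P > S:

```python
# One-shot PD stage payoffs for the row player: T=5 > R=3 > P=1 > S=0.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def strictly_dominant():
    """Return the action that beats the alternative against every opponent move."""
    for a, b in (('C', 'D'), ('D', 'C')):
        if all(PAYOFF[(a, m)] > PAYOFF[(b, m)] for m in 'CD'):
            return a
    return None

def backward_induction(n_rounds):
    """With a commonly known horizon, fold back from the last round.
    Equilibrium continuation play never depends on today's moves, so each
    round reduces to the one-shot game and the dominant action is played."""
    return [strictly_dominant() for _ in range(n_rounds)]

plan = backward_induction(100)
print(plan[:3], 'total payoff each:',
      sum(PAYOFF[(m, m)] for m in plan))  # all 'D', total 100
```

The cooperative half of the claim (Tit-for-Tat under an unknown horizon) is the folk-theorem side and doesn’t reduce to a few lines, so it is not sketched here.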
Well, that is my considered response on “snarkiness”. I still have to respond on some other points, and I suspect that, upon consideration, I am going to have to eat some crow. But I’m not backing down on this narrow point. Wei-Dai blew it in interpreting Aumann’s paper. (And also, other people who know some game theory should read the paper and savor the implications of footnote #4. It is totally cool).
The paper that Wei-Dai reviews is “The Absent-Minded Driver” by Robert J. Aumann, Sergiu Hart, and Motty Perry. Wei-Dai points out, rather condescendingly:
(Notice that the authors of this paper worked for a place called Center for the Study of Rationality, and one of them won a Nobel Prize in Economics for his work on game theory. I really don’t think we want to call these people “crazy”.)
How is Wei Dai being condescending there? He’s pointing out how weak it is to dismiss people with these credentials by just calling them crazy. ETA: In other words, it’s an admonishment directed at LWers.
I’m sure it would be Wei-Dai’s read as well. The thing is, if Wei-Dai had not mistakenly come to the conclusion that the authors are wrong and not as enlightened as LWers, that admonishment would not be necessary. I’m not saying he condescends to LWers. I say he condescends to the rest of the world, particularly game theorists.
No. Not at all. It is because he disagreed through the wrong channels, and then proceeded to propose rather insulting hypotheses as to why they had gotten it wrong.
Just read that list of possible reasons! And there are people here arguing that “of course we want to analyze the cause of mistakes”. Sheesh. No wonder folks here are so in love with Evolutionary Psychology.
Ok, I’m probably going to get downvoted to hell because of that last paragraph. And,
you know what, that downvoting impulse due to that paragraph pretty much makes my case for why Wei Dai was wrong to do what he did. Think about it.
Ok, I’m probably going to get downvoted to hell because of that last paragraph. And, you know what, that downvoting impulse due to that paragraph pretty much makes my case for why Wei Dai was wrong to do what he did. Think about it.
Interestingly enough I think that it is this paragraph that people will downvote, and not the one above. Mind you, the premise in “No wonder folks here are so in love with Evolutionary Psychology.” does seem so incredibly backward that I almost laughed.
No. Not at all. It is because he disagreed through the wrong channels, and then proceeded to propose rather insulting hypotheses as to why they had gotten it wrong.
I can understand your explanation here. Without agreeing with it myself I can see how it follows from your premises.
Are you saying that you read him differently, and that he would somehow be misinterpreting himself?
The thing is, if Wei-Dai had not mistakenly come to the conclusion that the authors are wrong and not as enlightened as LWers, that admonishment would not be necessary.
The admonishment is necessary if LWers are likely to wrongly dismiss Aumann et al. as “crazy”. In other words, to think that the admonishment is necessary is to think that LWers are too inclined to dismiss other people as crazy.
I’m not saying he condescends to LWers. I say he condescends to the rest of the world, particularly game theorists.
I got that. Who said anything about condescending to LWers?
Are you saying that you read him differently, and that he would somehow be misinterpreting himself?
Huh?? Surely, you troll. I am saying that Wei-Dai’s read would likely be the same as yours: that he was not condescending; that he was in fact cautioning his readers against looking down on the poor misguided Nobelists who, after all, probably had good reasons for being so mistaken. There, but for the grace of EY, go we.
Condescension is a combination of content and context. When you isolated that quote as especially condescending, I thought that you read something within it that was condescending. I was confused, because the quote could just as well have come from a post arguing that LWers ought to believe that Aumann et al. are right.
It now looks like you and I read the intrinsic meaning of the quote in the same way. The question then is, does that quote, placed in context, somehow make the overall post more condescending than it already was? Wei had already said that his treatment of the AMD was better than that of Aumann et al.. He had already said that these prestigious researchers got it wrong. Do you agree that if this were true, if the experts got it wrong, then we ought to try to understand how that happened, and not just dismiss them as crazy?
Whatever condescension occurred, it occurred as soon as Wei said that he was right and Aumann et al. were wrong. How can drawing a rational inference from that belief make it more condescending?
In this light I can see where ‘condescension’ fits in. There is a difference between ‘descending to be with’ and just plain ‘being way above’. For example we could label “they are wrong” as arrogant, “they are wrong but we can empathise with them and understand their mistake” as condescending and “They are wrong, that’s the kind of person Nobel prizes go to these days?” as “contemptuous”—even though they all operate from the same “I consider myself above in this instance” premise. Wei’s paragraph could then be considered to be transferring weight from arrogance and contempt into condescension.
(I still disapprove of Perplexed’s implied criticism.)
Okay, I can see this distinction. I can see how, as a matter of social convention, “they are wrong but we should understand their mistake” could come across as more condescending than just “they are wrong”. But I really don’t like that convention. If an expert is wrong, we really do have an obligation to understand how that happened. Accepting that obligation shouldn’t be stigmatized as condescending. (Not that you implied otherwise.)
the question then is, does that quote, placed in context, somehow make the overall post more condescending than it already was?
“They are probably not crazy” strikes me as “damning with faint praise”. IMHO, it definitely raises the overall condescension level.
Whatever condescension occurred, it occurred as soon as Wei said that he was right and Aumann et al. were wrong.
No. Peons claim lords are wrong all the time. It is not even impolite, if you are willing to admit your mistake and withdraw your claim reasonably quickly.
Condescension starts when you attempt to “charitably” analyze the source of the error.
Do you agree that if this were true, if the experts got it wrong, then we ought to try to understand how that happened, and not just dismiss them as crazy?
Of course. But if I merely had good reason to believe they were wrong, then my most urgent next step would be to determine whether it were true that they got it wrong. I would begin by communicating with the experts, either privately or through the peer-reviewed literature, so as to get some feedback as to whether they were wrong or I was mistaken. If it does indeed turn out that they were wrong, I would let them take the first shot at explaining the causes of their mistake. I doubt that I would try to analyze the cause of the mistake myself unless I were a trained historian dealing with a mistake at least 50 years old. Or, if I did try (and I probably have), I would hope that someone would point out my presumption.
Preliminary notes: You can call me “Wei Dai” (that’s firstname lastname). “He” is ok. I have taken a graduate level course in game theory (where I got a 4.0 grade, in case you suspect that I coasted through it), and have Fudenberg and Tirole’s “Game Theory” and Joyce’s “Foundations of Causal Decision Theory” as two of the few physical books that I own.
Their point is that if you insist on reasoning in this way (and Selten’s notion of “subgame perfection” suggests some reasons why you might!), then the algorithm they call “action-optimality” is the way to go about it.
I can’t see where they made this point. At the top of Section 4, they say “How, then, should the driver reason at the action stage?” and go on directly to describe action-optimality. If they said something like “One possibility is to just recompute and apply the planning-optimal solution. But if you insist …” please point out where. See also page 108:
In our case, there is only one player, who acts at different times. Because of his absent-mindedness, he had better coordinate his actions; this coordination can take place only before he starts out, at the planning stage. At that point, he should choose p*1. If indeed he chose p*1, there is no problem. If by mistake he chose p*2 or p*3, then that is what he should do at the action stage. (If he chose something else, or nothing at all, then at the action stage he will have some hard thinking to do.)
If Aumann et al. endorse using planning-optimality at the action stage, why would they say the driver has some hard thinking to do? Again, why not just recompute and apply the planning-optimal solution?
I also do not see how subgame perfection is relevant here. Can you explain?
Let me just point out that the reason it is true that “they never argued against it” is that they had already argued for it. Check out the implications of their footnote #4!
This footnote?
Formally, (p*, p*) is a symmetric Nash equilibrium in the (symmetric) game between ‘‘the driver at the current intersection’’ and ‘‘the driver at the other intersection’’ (the strategic form game with payoff functions h.)
Since p* is the action-optimal solution, they are pointing out the formal relationship between their notion of action-optimality and Nash equilibrium. How is this footnote an argument for “it” (it being “recomputing the planning-optimal decision at each intersection and carrying it out”)?
I have taken a graduate level course in game theory (where I got a 4.0 grade, in case you suspect that I coasted through it), and have Fudenberg and Tirole’s “Game Theory” and Joyce’s “Foundations of Causal Decision Theory” as two of the few physical books that I own.
Ok, so it is me who is convicted of condescending without having the background to justify it. :( FWIW I have never taken a course, though I have been reading in the subject for more than 45 years.
Relevance of subgame perfection. Selten proposed subgame perfection as a refinement of Nash equilibrium which requires that decisions that seemed rational at the planning stage ought to still seem rational at the action stage. This at least suggests that we might want to consider requiring “subgame perfection” even if we only have a single player making two successive decisions.
Relevance of Footnote #4. This points out that one way to think of problems where a single player makes a series of decisions is to pretend that the problem has a series of players making the decisions—one decision per player, but that these fictitious players are linked in that they all share the same payoffs (but not necessarily the same information). This is a standard “trick” in game theory, but the footnote points out that in this case, since both fictitious players have the same information (because of the absent-mindedness) the game between driver-version-1 and driver-version-2 is symmetric, and that is equivalent to the constraint p1 = p2.
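Taking the two-fictitious-players reading at face value, one can search numerically for the symmetric equilibrium of that game. The payoff function `h` below is my reconstruction, not a formula quoted from the paper: it uses the standard 0/4/1 payoffs and the consistent self-locating belief P(at first intersection) = 1/(1+p):

```python
# Footnote #4 reading: the two intersections as two "players" with identical
# information and shared payoffs; look for a p that is a best response to itself.
def h(z, p):
    """Expected payoff of continuing with prob z at the current intersection,
    while the 'other' intersection continues with prob p.
    Consistent beliefs: P(at X) = 1/(1+p), P(at Y) = p/(1+p)."""
    at_x = z * (1 - p) * 4 + z * p * 1  # first intersection: both exits ahead
    at_y = (1 - z) * 4 + z * 1          # second intersection: exiting now pays 4
    return (1 * at_x + p * at_y) / (1 + p)

grid = [i / 600 for i in range(601)]  # contains 2/3 exactly (i = 400)

def regret(p):
    """How much better the driver could do by deviating at the action stage."""
    return max(h(z, p) for z in grid) - h(p, p)

p_star = min(grid, key=regret)
print(p_star, regret(p_star))
```

On this grid the unique (near-)zero-regret point is p = 2/3, which coincides with the planning-optimal solution, consistent with the symmetric-Nash claim of footnote #4 in this example.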
Does Footnote #4 really amount to “they had already argued for [just recalculating the planning-optimal solution]”? Well, no it doesn’t really. I blew it in offering that as evidence. (Still think it is cool, though!)
Do they “argue for it” anywhere else? Yes, they do. Section 5, where they apply their methods to a slightly more complicated example, is an extended argument for the superiority of the planning-optimal solution to the action-optimal solutions. As they explain, there can be multiple action-optimal solutions even when there is only one (correct) planning-optimal solution, and some of those action-optimal solutions are wrong even though they appear to promise a higher expected payoff than does the planning-optimal solution.
I can’t see where they made this point. At the top of Section 4, they say “How, then, should the driver reason at the action stage?” and go on directly to describe action-optimality. If they said something like “One possibility is to just recompute and apply the planning-optimal solution. But if you insist …” please point out where. See also page 108:
In our case, there is only one player, who acts at different times. Because of his absent-mindedness, he had better coordinate his actions; this coordination can take place only before he starts out at the planning stage. At that point, he should choose p1. If indeed he chose p1, there is no problem. If by mistake he chose p2 or p3, then that is what he should do at the action stage. (If he chose something else, or nothing at all, then at the action stage he will have some hard thinking to do.)
If Aumann et al. endorse using planning-optimality at the action stage, why would they say the driver has some hard thinking to do? Again, why not just recompute and apply the planning-optimal solution?
I really don’t see why you are having so much trouble parsing this. “If indeed he chose p1, there is no problem” is an endorsement of the correctness of the planning-optimal solution. The sentence dealing with p2 and p3 asserts that, if you mistakenly used p2 for your first decision, then your best follow-up is to remain consistent and use p2 for your remaining two choices. The paragraph you quote to make your case is one I might well choose myself to make mine.
Edit: There are some asterisks in variable names in the original paper which I was unable to make work with the italics rules on this site. So “p2” above should be read as “p*2”.
It is a statement that the planning-optimal action is the correct one, but it’s not an endorsement that it is correct to use the planning-optimality algorithm to compute what to do when you are already at an intersection. Do you see the difference?
ETA (edited to add): According to my reading of that paragraph, what they actually endorse is to compute the planning-optimal action at START, remember that, then at each intersection, compute the set of action-optimal actions, and pick the element of the set that coincides with the planning-optimal action.
BTW, you can use “\” to escape special characters like “*” and “_”.
Thx for the escape character info. That really ought to be added to the editing help popup.
Yes, I see the difference. I claim that what they are saying here is that you need to do the planning-optimal calculation in order to find p*1 as the unique best solution (among the three solutions that the action-optimal method provides). Once you have this, you can use it at the first intersection. But at the other intersections, you have some choices: either recalculate the planning-optimal solution each time, or write down enough information so that you can recognize that p*1 is the solution you are already committed to among the three (in section 5) solutions returned by the action-optimality calculation.
ETA in response to your ETA. Yes, they do. Good point. I’m pretty sure there are cases more complicated than this perfectly amnesiac driver where that would be the only correct policy. (ETA: To be more specific, cases where the planning-optimal solution is not a sequential equilibrium.) But then I have no reason to think that UDT would yield the correct answer in those more complicated cases either.
I deleted my previous reply since it seems unnecessary given your ETA.
I’m pretty sure there are cases more complicated than this perfectly amnesiac driver where that would be the only correct policy. (ETA:To be more specific, cases where the planning-optimal solution is not a sequential equilibrium).
What would be the only correct policy? What I wrote after “According to my reading of that paragraph”? If so, I don’t understand your “cases where the planning-optimal solution is not a sequential equilibrium”. Please explain.
What would be the only correct policy? What I wrote after “According to my reading of that paragraph”?
Yes.
If so, I don’t understand your “cases where the planning-optimal solution is not a sequential equilibrium”. Please explain.
I would have thought it would be self explanatory.
It looks like I will need to construct and analyze examples slightly more complicated than the Absent-Minded Driver. That may take a while. Questions before I start: Does UDT encompass game theory, or is it limited to analyzing single-player situations? Is UDT completely explained in your postings, or is it, like TDT, still in the process of being written up?
Questions before I start: Does UDT encompass game theory, or is it limited to analyzing single-player situations? Is UDT completely explained in your postings, or is it, like TDT, still in the process of being written up?
Wei has described a couple versions of UDT. His descriptions seemed to me to be mathematically rigorous. Based on Wei’s posts, I wrote this pdf, which gives just the definition of a UDT agent (as I understand it), without motivation or justification.
The difficulty with multiple agents looks like it will be very hard to get around within the UDT framework. UDT works essentially by passing the buck to an agent who is at the planning stage*. That planning-stage agent then performs a conventional expected-utility calculation.
But some scenarios seem best described by saying that there are multiple planning-stage agents. That means that UDT is subject to all of the usual difficulties that arise when you try to use expected utility alone in multiplayer games (e.g., prisoners dilemma). It’s just that these difficulties arise at the planning stage instead of at the action stage directly.
*Somewhat more accurately, the buck is passed to the UDT agent’s simulation of an agent who is at the planning stage.
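The multi-planner difficulty can be made concrete with the one-shot PD: even if each side selects a complete policy by expected utility at its own planning stage, the standard equilibrium logic reappears unchanged at that stage. A toy sketch, with the usual illustrative payoff numbers (not from the thread):

```python
# Two planning-stage agents each pick a one-shot PD policy by expected
# utility, best-responding to the other's current policy. The difficulty
# L101 describes shows up as convergence to mutual defection.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def best_response(opponent_policy):
    # The game is symmetric, so index [0] ("my" payoff against the
    # opponent's move) works for either planner.
    return max('CD', key=lambda a: PAYOFF[(a, opponent_policy)][0])

p1, p2 = 'C', 'C'          # start from the cooperative policy pair
for _ in range(10):        # iterate planning-stage best responses
    p1, p2 = best_response(p2), best_response(p1)
print(p1, p2)  # D D
```

Moving the expected-utility calculation to the planning stage relocates the problem; it does not dissolve the strategic interaction between the two planners.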
What I meant was, what point were you trying to make with that statement? According to Aumann’s paper, every planning-optimal solution is also an action-optimal solution, so the decision procedure they endorse will end up picking the planning-optimal solution. (My complaint is just that it goes about it in an unnecessarily round-about way.) If theirs is a correct policy, then the policy of just recomputing the planning-optimal solution must also be correct. That seems to disprove your “only correct policy” claim. I thought your “sequential equilibrium” line was trying to preempt this argument, but I can’t see how.
Does UDT encompass game theory, or is it limited to analyzing single-player situations?
Pretty much single-player for now. A number of people are trying to extend the ideas to multi-player situations, but it looks really hard.
Is UDT completely explained in your postings, or is it, like TDT, still in the process of being written up?
No, it’s not being written up further. (Nesov is writing up some of his ideas, which are meant to be an advance over UDT.)
What I meant was, what point were you trying to make with that statement? According to Aumann’s paper, every planning-optimal solution is also an action-optimal solution, so the decision procedure they endorse will end up picking the planning-optimal solution.
My understanding of their paper has changed somewhat since we began this discussion. I now believe that repeating the planning-optimal analysis at every decision node is only guaranteed to give ideal results in simple cases like this one, in which every decision point is in the same information set. In more complicated cases, I can imagine that the policy of planning-optimal-for-the-first-move, then action-optimal-thereafter might do better. I would need to construct an example to assert this with confidence.
(My complaint is just that it goes about it in an unnecessarily round-about way.) If theirs is a correct policy, then the policy of just recomputing the planning-optimal solution must also be correct.
In this simple example, yes. Perhaps not in more complicated cases.
That seems to disprove your “only correct policy” claim. I thought your “sequential equilibrium” line was trying to preempt this argument, but I can’t see how.
And I can’t see how to explain it without an example.
While I wait, did you see anything in Aumann’s paper that hints at “the policy of planning-optimal-for-the first-move, then action-optimal-thereafter might do better”? Or is that your original research (to use Wikipedia-speak)? It occurs to me that if you’re correct about that, the authors of the paper should have realized it themselves and mentioned it somewhere, since it greatly strengthens their position.
Answering that is a bit tricky. If I am wrong, it is certainly “original research”. But my belief is based upon readings in game theory (including stuff by Aumann) which are not explicitly contained in that paper.
Please bear with me. I have a multi-player example in mind, but I hope to be able to find a single-player one which makes the reasoning clearer.
Regarding your last sentence, I must point out that the whole reason we are having this discussion is my claim to the effect that you don’t really understand their position, and hence cannot judge what does or does not strengthen it.
Ok, I now have at least a sketch of an example. I haven’t worked it out in detail, so I may be wrong, but here is what I think. In any scenario in which you gain and act on information after the planning stage, you should not use a recalculated planning-stage solution for any decisions after you have acted upon that information. Instead, you need to do the action-optimal analysis.
For example, let us complicate the absent-minded driver scenario that you diagrammed by adding an information-receipt and decision node prior to those two identical intersections. The driver comes in from the west and arrives at a T intersection where he can turn left (north) or right (south). At the intersection is a billboard advertising today’s lunch menu at Casa de Maria, his favorite restaurant. If the billboard promotes chile, he will want to turn right so as to have a good chance of reaching Maria’s for lunch. But if the billboard promotes enchiladas, which he dislikes, he probably wants to turn the other way and try for Marcello’s Pizza. Whether he turns right or left at the billboard, he will face two consecutive identical intersections (four identical intersections total). The day is cloudy, so he cannot tell whether he is traveling north or south.
Working this example in detail will take some work. Let me know if you think the work is necessary.
Once you have this, you can use it at the first intersection. But at the other intersections, you have some choices
It is a part of the problem statement that you can’t distinguish between being at any of the intersections. So you have to use the same algorithm at all of them.
either recalculate the planning-optimal solution each time
How are you getting this from their words? What about “this coordination can take place only before he starts out at the planning stage”? And “If he chose something else, or nothing at all, then at the action stage he will have some hard thinking to do”? Why would they say “hard thinking” if they meant “recalculate the planning-optimal solution”? (Especially when the planning-optimality calculation is simpler than the action-optimality calculation.)
In the comment section of Wei Dai’s post in question, taw and pengvado completed his solution so conclusively that if you really take the time to understand the object level (instead of the meta level, where some people are a priori smarter because they won a prize), you can’t help but feel the snarking was justified :-)
1A. It may well be a wrong problem. If so, it ought to be dissolved.
1B. If so, many theorists (including, presumably, Nobel Prize winners) have missed it since 1969.
1C. Your intuition should not be considered a persuasive argument, even by you.
2. Even ignoring any singularitarian predictions, given the degree to which knowledge acceleration has already advanced, you should expect to see cases where old standards are blown away with seemingly little effort.
Maybe this isn’t one of those cases, but it should not surprise you if we learn that humanity as a whole has done more decision theory in the past few years than in all previous history.
Given that similar accelerations are happening in many fields, there are probably several past-Nobel-level advances by rank amateurs with no special genius.
OK, I’ve got some big guns pointed at me, so I need to respond. I need to respond intelligently and carefully. That will take some time. Within a week at most.
For a long time I also didn’t think that Newcomb’s Problem was worth thinking about. Then I read something by Eliezer that pointed out the connection to the Prisoner’s Dilemma. (According to “Prisoners’ Dilemma is a Newcomb Problem”, others saw the connection as early as 1969.) See also my Newcomb’s Problem vs. One-Shot Prisoner’s Dilemma, where I explored how they are different as well.
I’m curious what you now think about my perspective on the Absent Minded Driver, on both the object level and meta level (assuming I convinced you that it wasn’t meant to be a snark). You’re the only person who has indicated actually having read Aumann et al.’s paper.
The possible connection between Newcomb and PD is seen by anyone who considers Jeffrey’s version of decision theory (EDT). So I have seen it mentioned by philosophers long before I had heard of EY. Game theorists, of course, reject this, unless they are analysing games with “free precommitment”. I instinctively reject it too, for what that is worth, though I am beginning to realize that publishing your unchangeable source code is pretty-much equivalent to free precommitment.
My analysis of your analysis of AMD is in my response to your comment below.
The paper that Wei-Dai reviews is “The Absent-Minded Driver” by Robert J. Aumann, Sergiu Hart, and Motty Perry. Wei-Dai points out, rather condescendingly:
Wei-Dai then proceeds to give a competent description of the problem and its standard “planning-optimality” solution. Next comes a description of an alternative seductive-but-wrong solution by Piccione and Rubinstein. I should point out that everyone—P&R; Aumann, Hart, and Perry; Wei-Dai; me; and hopefully you who look into this—realizes that the alternative P&R solution is wrong. It gets the wrong result. It doesn’t win. The only problem lies in explaining exactly where the analysis leading to that solution went astray, and in explaining how it might be modified so as to go right. Making this analysis was, as I see it, the whole point of both papers—P&R and Aumann et al. Wei-Dai describes some characteristics of Aumann et al.’s corrected version of the alternate solution. Then he (?) goes horribly astray:
But, as anyone who reads the paper carefully should see, they weren’t arguing for action-optimality as the solution. They never abandoned planning optimality. Their point is that if you insist on reasoning in this way (and Selten’s notion of “subgame perfection” suggests some reasons why you might!), then the algorithm they call “action-optimality” is the way to go about it.
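To keep the object level concrete: the planning-optimality calculation is just a one-line maximization. Here is a quick sketch, assuming the standard payoffs from the absent-minded driver literature (0 for exiting at the first intersection, 4 for exiting at the second, 1 for driving on past both), which are not restated in this thread:

```python
# Planning-optimality for the absent-minded driver, assuming the standard
# payoffs: 0 for exiting at the first intersection (X), 4 for exiting at
# the second (Y), 1 for driving on past both.
def planning_payoff(p):
    # p is the probability of continuing (not exiting) at an intersection.
    # Exit at X: prob (1-p), payoff 0; exit at Y: prob p*(1-p), payoff 4;
    # continue past both: prob p*p, payoff 1.
    return 4 * p * (1 - p) + 1 * p * p

# Grid search for the planning-optimal continue-probability p*.
best_p = max((i / 1000 for i in range(1001)), key=planning_payoff)
print(best_p)                   # ≈ 0.667, i.e. p* = 2/3
print(planning_payoff(best_p))  # ≈ 1.333, i.e. expected payoff 4/3
```

Committing to continue with probability 2/3 at every intersection is the planning-optimal solution both papers agree on; the whole dispute is over how the driver should reason once he is already sitting at an intersection.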
But Wei-Dai doesn’t get this. Instead we get this analysis of how these brilliant people just haven’t had the educational advantages that LW folks have:
Let me just point out that the reason it is true that “they never argued against it” is that they had already argued for it. Check out the implications of their footnote #4!
Ok, those are the facts, as I see them. Was Wei-Dai snarky? I suppose it depends on how you define snarkiness. Taboo “snarky”. I think that he was overbearingly condescending without the slightest real reason for thinking himself superior. “Snarky” may not be the best one-word encapsulation of that attitude, but it is the one I chose. I am unapologetic. Wei-Dai somehow came to believe himself better able to see the truth than a Nobel laureate in the Nobel laureate’s field. It is a mistake he would not have made had he simply read a textbook or taken a one-semester course in the field. But I’m coming to see it as a mistake made frequently by SIAI insiders.
Let me point out that the problem of forgetful agents may seem artificial, but it is actually extremely important. An agent with perfect recall playing the iterated PD, knowing that it is to be repeated exactly 100 times, should rationally defect in every round (by backward induction). On the other hand, if he cannot remember how many iterations remain to be played, and knows that the other player cannot remember either, he should cooperate by playing Tit-for-Tat or something similar.
Well, that is my considered response on “snarkiness”. I still have to respond on some other points, and I suspect that, upon consideration, I am going to have to eat some crow. But I’m not backing down on this narrow point. Wei-Dai blew it in interpreting Aumann’s paper. (And also, other people who know some game theory should read the paper and savor the implications of footnote #4. It is totally cool).
How is Wei Dai being condescending there? He’s pointing out how weak it is to dismiss people with these credentials by just calling them crazy. ETA: In other words, it’s an admonishment directed at LWers.
That, at any rate, was my read.
I’m sure it would be Wei-Dai’s read as well. The thing is, if Wei-Dai had not mistakenly come to the conclusion that the authors are wrong and not as enlightened as LWers, that admonishment would not be necessary. I’m not saying he condescends to LWers. I say he condescends to the rest of the world, particularly game theorists.
Are you essentially saying you are nauseated because Wei Dai disagreed with the authors?
No. Not at all. It is because he disagreed through the wrong channels, and then proceeded to propose rather insulting hypotheses as to why they had gotten it wrong.
Just read that list of possible reasons! And there are people here arguing that “of course we want to analyze the cause of mistakes”. Sheesh. No wonder folks here are so in love with Evolutionary Psychology.
Ok, I’m probably going to get downvoted to hell because of that last paragraph. And, you know what, that downvoting impulse due to that paragraph pretty much makes my case for why Wei Dai was wrong to do what he did. Think about it.
Interestingly enough I think that it is this paragraph that people will downvote, and not the one above. Mind you, the premise in “No wonder folks here are so in love with Evolutionary Psychology.” does seem so incredibly backward that I almost laughed.
I can understand your explanation here. Without agreeing with it myself I can see how it follows from your premises.
I’m having trouble following you.
Are you saying that you read him differently, and that he would somehow be misinterpreting himself?
The admonishment is necessary if LWers are likely to wrongly dismiss Aumann et al. as “crazy”. In other words, to think that the admonishment is necessary is to think that LWers are too inclined to dismiss other people as crazy.
I got that. Who said anything about condescending to LWers?
Huh?? Surely, you troll. I am saying that Wei-Dai’s read would likely be the same as yours: that he was not condescending; that he was in fact cautioning his readers against looking down on the poor misguided Nobelists who, after all, probably had good reasons for being so mistaken. There, but for the grace of EY, go we.
Or was I really that unclear?
Condescension is a combination of content and context. When you isolated that quote as especially condescending, I thought that you read something within it that was condescending. I was confused, because the quote could just as well have come from a post arguing that LWers ought to believe that Aumann et al. are right.
It now looks like you and I read the intrinsic meaning of the quote in the same way. The question then is, does that quote, placed in context, somehow make the overall post more condescending than it already was? Wei had already said that his treatment of the AMD was better than that of Aumann et al.. He had already said that these prestigious researchers got it wrong. Do you agree that if this were true, if the experts got it wrong, then we ought to try to understand how that happened, and not just dismiss them as crazy?
Whatever condescension occurred, it occurred as soon as Wei said that he was right and Aumann et al. were wrong. How can drawing a rational inference from that belief make it more condescending?
In this light I can see where ‘condescension’ fits in. There is a difference between ‘descending to be with’ and just plain ‘being way above’. For example we could label “they are wrong” as arrogant, “they are wrong but we can empathise with them and understand their mistake” as condescending and “They are wrong, that’s the kind of person Nobel prizes go to these days?” as “contemptuous”—even though they all operate from the same “I consider myself above in this instance” premise. Wei’s paragraph could then be considered to be transferring weight from arrogance and contempt into condescension.
(I still disapprove of Perplexed’s implied criticism.)
Okay, I can see this distinction. I can see how, as a matter of social convention, “they are wrong but we should understand their mistake” could come across as more condescending than just “they are wrong”. But I really don’t like that convention. If an expert is wrong, we really do have an obligation to understand how that happened. Accepting that obligation shouldn’t be stigmatized as condescending. (Not that you implied otherwise.)
“They are probably not crazy” strikes me as “damning with faint praise”. IMHO, it definitely raises the overall condescension level.
No. Peons claim lords are wrong all the time. It is not even impolite, if you are willing to admit your mistake and withdraw your claim reasonably quickly.
Condescension starts when you attempt to “charitably” analyze the source of the error.
Of course. But if I merely had good reason to believe they were wrong, then my most urgent next step would be to determine whether it were true that they got it wrong. I would begin by communicating with the experts, either privately or through the peer-reviewed literature, so as to get some feedback as to whether they were wrong or I was mistaken. If it does indeed turn out that they were wrong, I would let them take the first shot at explaining the causes of their mistake. I doubt that I would try to analyze the cause of the mistake myself unless I were a trained historian dealing with a mistake at least 50 years old. Or, if I did try (and I probably have), I would hope that someone would point out my presumption.
Preliminary notes: You can call me “Wei Dai” (that’s firstname lastname). “He” is ok. I have taken a graduate level course in game theory (where I got a 4.0 grade, in case you suspect that I coasted through it), and have Fudenberg and Tirole’s “Game Theory” and Joyce’s “Foundations of Causal Decision Theory” as two of the few physical books that I own.
I can’t see where they made this point. At the top of Section 4, they say “How, then, should the driver reason at the action stage?” and go on directly to describe action-optimality. If they said something like “One possibility is to just recompute and apply the planning-optimal solution. But if you insist …” please point out where. See also page 108:
If Aumann et al. endorse using planning-optimality at the action stage, why would they say the driver has some hard thinking to do? Again, why not just recompute and apply the planning-optimal solution?
I also do not see how subgame perfection is relevant here. Can you explain?
This footnote?
Since p* is the action-optimal solution, they are pointing out the formal relationship between their notion of action-optimality and Nash equilibrium. How is this footnote an argument for “it” (it being “recomputing the planning-optimal decision at each intersection and carrying it out”)?
Ok, so it is me who is convicted of condescending without having the background to justify it. :( FWIW I have never taken a course, though I have been reading in the subject for more than 45 years.
My apologies. More to come on the substance.
Relevance of subgame perfection. Selten suggested subgame perfection as a refinement of Nash equilibrium which requires that decisions that seemed rational at the planning stage ought to still seem rational at the action stage. This at least suggests that we might want to consider requiring “subgame perfection” even if we only have a single player making two successive decisions.
Relevance of Footnote #4. This points out that one way to think of problems where a single player makes a series of decisions is to pretend that the problem has a series of players making the decisions—one decision per player, but that these fictitious players are linked in that they all share the same payoffs (but not necessarily the same information). This is a standard “trick” in game theory, but the footnote points out that in this case, since both fictitious players have the same information (because of the absent-mindedness) the game between driver-version-1 and driver-version-2 is symmetric, and that is equivalent to the constraint p1 = p2.
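To make the footnote’s point concrete, here is a quick numerical sketch (again assuming the standard 0/4/1 payoffs, which are not restated in this thread): imposing the symmetry constraint p1 = p2 on the two-fictitious-player game collapses it back into the planning-stage maximization.

```python
# Footnote-4 framing: pretend a separate fictitious player chooses the
# continue-probability at each intersection, both receiving the common payoff.
def joint_payoff(p1, p2):
    # Exit at first intersection: payoff 0; continue (p1) then exit at
    # the second: payoff 4; continue past both (p1*p2): payoff 1.
    return 4 * p1 * (1 - p2) + 1 * p1 * p2

# Absent-mindedness gives both fictitious players identical information,
# so the game is symmetric; imposing p1 == p2 reduces it to the single
# planning-stage maximization.
best = max((i / 1000 for i in range(1001)), key=lambda p: joint_payoff(p, p))
print(best)  # ≈ 0.667: the planning-optimal p* = 2/3 again
```

Along the constrained diagonal the joint payoff is exactly the planning-stage objective, which is why the symmetric game recovers the same optimum.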
Does Footnote #4 really amount to “they had already argued for [just recalculating the planning-optimal solution]”? Well, no it doesn’t really. I blew it in offering that as evidence. (Still think it is cool, though!)
Do they “argue for it” anywhere else? Yes, they do. Section 5, where they apply their methods to a slightly more complicated example, is an extended argument for the superiority of the planning-optimal solution to the action-optimal solutions. As they explain, there can be multiple action-optimal solutions, even if there is only one (correct) planning-optimal solution, and some of those action-optimal solutions are wrong *even though they appear to promise a higher expected payoff than does the planning-optimal solution*.
I really don’t see why you are having so much trouble parsing this. “If indeed he chose p1, there is no problem” is an endorsement of the correctness of the planning-optimal solution. The sentence dealing with p2 and p3 asserts that, if you mistakenly used p2 for your first decision, then your best follow-up is to remain consistent and use p2 for your remaining two choices. The paragraph you quote to make your case is one I might well choose myself to make my case.
Edit: There are some asterisks in variable names in the original paper which I was unable to make work with the italics rules on this site. So “p2” above should be read as “p*2”.
It is a statement that the planning-optimal action is the correct one, but it’s not an endorsement that it is correct to use the planning-optimality algorithm to compute what to do when you are already at an intersection. Do you see the difference?
ETA (edited to add): According to my reading of that paragraph, what they actually endorse is to compute the planning-optimal action at START, remember that, then at each intersection, compute the set of action-optimal actions, and pick the element of the set that coincides with the planning-optimal action.
BTW, you can use “\” to escape special characters like “*” and “_”.
Thx for the escape character info. That really ought to be added to the editing help popup.
Yes, I see the difference. I claim that what they are saying here is that you need to do the planning-optimal calculation in order to find p*1 as the unique best solution (among the three solutions that the action-optimal method provides). Once you have this, you can use it at the first intersection. But at the other intersections, you have some choices: either recalculate the planning-optimal solution each time, or write down enough information so that you can recognize that p*1 is the solution you are already committed to among the three (in section 5) solutions returned by the action-optimality calculation.
ETA in response to your ETA. Yes they do. Good point. I’m pretty sure there are cases more complicated than this perfectly amnesiac driver where that would be the only correct policy. (ETA:To be more specific, cases where the planning-optimal solution is not a sequential equilibrium). But then I have no reason to think that UDT would yield the correct answer in those more complicated cases either.
I deleted my previous reply since it seems unnecessary given your ETA.
What would be the only correct policy? What I wrote after “According to my reading of that paragraph”? If so, I don’t understand your “cases where the planning-optimal solution is not a sequential equilibrium”. Please explain.
Yes.
I would have thought it would be self explanatory.
It looks like I will need to construct and analyze examples slightly more complicated than the Absent Minded Driver. That may take a while. Questions before I start: Does UDT encompass game theory, or is it limited to analyzing single-player situations? Is UDT completely explained in your postings, or is it, like TDT, still in the process of being written up?
Wei has described a couple versions of UDT. His descriptions seemed to me to be mathematically rigorous. Based on Wei’s posts, I wrote this pdf, which gives just the definition of a UDT agent (as I understand it), without motivation or justification.
The difficulty with multiple agents looks like it will be very hard to get around within the UDT framework. UDT works essentially by passing the buck to an agent who is at the planning stage*. That planning-stage agent then performs a conventional expected-utility calculation.
But some scenarios seem best described by saying that there are multiple planning-stage agents. That means that UDT is subject to all of the usual difficulties that arise when you try to use expected utility alone in multiplayer games (e.g., the prisoner’s dilemma). It’s just that these difficulties arise at the planning stage instead of at the action stage directly.
*Somewhat more accurately, the buck is passed to the UDT agent’s simulation of an agent who is at the planning stage.
What I meant was, what point were you trying to make with that statement? According to Aumann’s paper, every planning-optimal solution is also an action-optimal solution, so the decision procedure they endorse will end up picking the planning-optimal solution. (My complaint is just that it goes about it in an unnecessarily round-about way.) If theirs is a correct policy, then the policy of just recomputing the planning-optimal solution must also be correct. That seems to disprove your “only correct policy” claim. I thought your “sequential equilibrium” line was trying to preempt this argument, but I can’t see how.
Pretty much single-player for now. A number of people are trying to extend the ideas to multi-player situations, but it looks really hard.
No, it’s not being written up further. (Nesov is writing up some of his ideas, which are meant to be an advance over UDT.)
My understanding of their paper has changed somewhat since we began this discussion. I now believe that repeating the planning-optimal analysis at every decision node is only guaranteed to give ideal results in simple cases like this one, in which every decision point is in the same information set. In more complicated cases, I can imagine that the policy of planning-optimal-for-the-first-move, then action-optimal-thereafter, might do better. I would need to construct an example to assert this with confidence.
In this simple example, yes. Perhaps not in more complicated cases.
And I can’t see how to explain it without an example.
While I wait, did you see anything in Aumann’s paper that hints at “the policy of planning-optimal-for-the-first-move, then action-optimal-thereafter might do better”? Or is that your original research (to use Wikipedia-speak)? It occurs to me that if you’re correct about that, the authors of the paper should have realized it themselves and mentioned it somewhere, since it greatly strengthens their position.
Answering that is a bit tricky. If I am wrong, it is certainly “original research”. But my belief is based upon readings in game theory (including stuff by Aumann) which are not explicitly contained in that paper.
Please bear with me. I have a multi-player example in mind, but I hope to be able to find a single-player one which makes the reasoning clearer.
Regarding your last sentence, I must point out that the whole reason we are having this discussion is my claim to the effect that you don’t really understand their position, and hence cannot judge what does or does not strengthen it.
Ok, I now have at least a sketch of an example. I haven’t worked it out in detail, so I may be wrong, but here is what I think. In any scenario in which you gain and act on information after the planning stage, you should not use a recalculated planning-stage solution for any decisions after you have acted upon that information. Instead, you need to do the action-optimal analysis.
For example, let us complicate the absent-minded driver scenario that you diagrammed by adding an information-receipt and decision node prior to those two identical intersections. The driver comes in from the west and arrives at a T intersection where he can turn left (north) or right (south). At the intersection is a billboard advertising today’s lunch menu at Casa de Maria, his favorite restaurant. If the billboard promotes chile, he will want to turn right so as to have a good chance of reaching Maria’s for lunch. But if the billboard promotes enchiladas, which he dislikes, he probably wants to turn the other way and try for Marcello’s Pizza. Whichever way he turns at the billboard, he will face two consecutive identical intersections (four identical intersections total). The day is cloudy, so he cannot tell whether he is traveling north or south.
Working this example in detail will take some work. Let me know if you think the work is necessary.
Ok, I see. I’ll await your example.
It is a part of the problem statement that you can’t distinguish between being at any of the intersections. So you have to use the same algorithm at all of them.
How are you getting this from their words? What about “this coordination can take place only before he starts out at the planning stage”? And “If he chose something else, or nothing at all, then at the action stage he will have some hard thinking to do”? Why would they say “hard thinking” if they meant “recalculate the planning-optimal solution”? (Especially when the planning-optimality calculation is simpler than the action-optimality calculation.)
You can use a backslash to escape special characters in markdown.
If you type \*, that will show up as * in the posted text.
In the comment section of Wei Dai’s post in question, taw and pengvado completed his solution so conclusively that if you really take the time to understand the object level (instead of the meta level, where some people are a priori smarter because they won a prize), you can’t help but feel the snarking was justified :-)
1A. It may well be a wrong problem. If so, it ought to be dissolved.
1B. If so, many theorists (presumably including Nobel Prize winners) have missed it since 1969.
1C. Your intuition should not be considered a persuasive argument, even by you.
2. Even ignoring any singularitarian predictions, given the degree to which knowledge acceleration has already advanced, you should expect to see cases where old standards are blown away with seemingly little effort.
Maybe this isn’t one of those cases, but it should not surprise you if we learn that humanity as a whole has done more decision theory in the past few years than in all previous history.
Given that similar accelerations are happening in many fields, there are probably several past-Nobel-level advances by rank amateurs with no special genius.
OK, I’ve got some big guns pointed at me, so I need to respond. I need to respond intelligently and carefully. That will take some time. Within a week at most.
A couple more comments:
For a long time I also didn’t think that Newcomb’s Problem was worth thinking about. Then I read something by Eliezer that pointed out the connection to Prisoner’s Dilemma. (According to “Prisoners’ Dilemma is a Newcomb Problem,” others saw the connection as early as 1969.) See also my “Newcomb’s Problem vs. One-Shot Prisoner’s Dilemma,” where I explored how they are different as well.
I’m curious what you now think about my perspective on the Absent Minded Driver, on both the object level and meta level (assuming I convinced you that it wasn’t meant to be a snark). You’re the only person who has indicated actually having read Aumann et al.’s paper.
The possible connection between Newcomb and PD is seen by anyone who considers Jeffrey’s version of decision theory (EDT). So I have seen it mentioned by philosophers long before I had heard of EY. Game theorists, of course, reject this, unless they are analysing games with “free precommitment”. I instinctively reject it too, for what that is worth, though I am beginning to realize that publishing your unchangeable source code is pretty much equivalent to free precommitment.
My analysis of your analysis of AMD is in my response to your comment below.