This notion of mechanism design, and more generally of rational play, is certainly interesting mathematics, but in practice it often leads to mechanisms that consistently perform very badly (sometimes it gives good mechanisms, but that is no thanks to the formalism).
When you say that a mechanism “performs badly”, do you mean that it performs badly for one party (and hence very well for the other party) or do you mean that it performs badly for all parties to the attempted transaction?
I’m just saying that you might use a different model if you re-examined the maxim “rational play ends at a Nash equilibrium” and its justification.
Could you re-examine the maxim “rational play ends at a Nash equilibrium”? The usual justification is that rational play cannot possibly end anywhere else—otherwise one rational player or the other would change strategies. What is wrong with that, in a two-person game? For that matter, doesn’t the justification still work when there are many players?
By performs badly, I meant that it fails to exhibit the properties its designers imagined, or “proved.” For example, if the designers prove that this mechanism generates the maximum possible revenue and the mechanism ends up generating no revenue when deployed in practice, I would say it performs badly. Similarly, if the mechanism is intended to maximize the social welfare but then selects a Pareto-inefficient outcome, I would say that it performs badly.
When I say that rational play may not “end” at a Nash equilibrium, I mean that when rational players (fully aware of each other’s rationality, etc.) sit down to play a game, we should not be too confident that they will play a Nash equilibrium. I think my objection to your reasoning is that the players are not sequentially given opportunities to deviate; they choose a strategy and then play it. That is the definition of a strategy; if you are allowed to change your strategy iteratively then you are playing a new game, in which the strategy set has simply been enlarged and to which a similar criticism applies. Here is an example which at least calls into doubt the normal justification.
Suppose that a mechanism with two Nash equilibria, A and B, is deployed commonly at auctions all around the world. Because the mechanism was carefully designed, the goods are allocated efficiently at both equilibria. In Italy, everyone plays equilibrium A. Knowing this, and aware of the rationality of the average Italian, all Italians participating at auctions in Italy select equilibrium A. In America, everyone plays equilibrium B. Knowing this, and aware of the rationality of the average American, all Americans participating at auctions in America select equilibrium B. Now a poor American tourist in Italy participates in an auction, and (ignorant of the world as Americans are) he tries to play equilibrium B. Consequently, the auction fails to allocate goods efficiently—the mechanism made no guarantees about what happened when some individuals played from one equilibrium and others played from a different equilibrium. The American’s failure cannot be attributed to some failure of rationality; after playing, the Italians might also all wish that they had changed their strategy. This is also a real problem for mechanisms; there are certain classes of problems and mechanisms for which you can prove that this sort of thing will always be possible. You can try, as the mechanism designer, to suggest an equilibrium A to the players. But if one equilibrium is better for some players and worse for others, why should the players automatically accept your proposed equilibrium? If they all choose to play another Nash equilibrium B which is better for all of them, are they behaving irrationally?
(Of course this example does not apply to dominant strategy truthful mechanisms.)
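To make the two-equilibria point concrete, here is a minimal sketch of the kind of game being described. The payoffs are my own toy numbers, purely illustrative, not anything from the thread: matching on either convention is efficient, and miscoordinating wastes the surplus.

```python
# Toy coordination game (hypothetical payoffs, purely illustrative):
# both matched profiles are efficient Nash equilibria; mismatching is not.
payoffs = {
    ("A", "A"): (10, 10),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (10, 10),
}

def is_nash(s1, s2):
    """Neither player can gain by deviating unilaterally."""
    u1, u2 = payoffs[(s1, s2)]
    return all(payoffs[(d, s2)][0] <= u1 for d in "AB") and \
           all(payoffs[(s1, d)][1] <= u2 for d in "AB")

for (s1, s2) in payoffs:
    print((s1, s2), payoffs[(s1, s2)], "NE" if is_nash(s1, s2) else "not NE")
# ("A", "A") and ("B", "B") are both Nash equilibria with identical payoffs,
# so nothing inside the model singles one out; the Italian/American profile
# ("B", "A") is not an equilibrium, and it is the one that actually gets played.
```

Both equilibria look equally good from inside the model, which is exactly why the model alone cannot tell the tourist which one to play.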
By performs badly, I meant that it fails to exhibit the properties its designers imagined, or “proved.” For example, if the designers prove that this mechanism generates the maximum possible revenue and the mechanism ends up generating no revenue when deployed in practice, I would say it performs badly.
Thanks for the reference to Abreu-Matsushima (in your response to noitanigami). I wasn’t familiar with that. Nonetheless, to the extent I understood, the mechanism only fails in the case of collusion among the bidders (violating an assumption of the proof—right?). And the seller can protect himself by making a bid himself on each item in the lot (a minimum selling price).
Similarly, if the mechanism is intended to maximize the social welfare but then selects a Pareto-inefficient outcome, I would say that it performs badly.
I assume you are referring to VCG here. Yeah, chalk up another failure for excessively iterated removal of dominated strategies. It seems we really do need a theory of “trembling brain equilibrium”. But, then, short of a fully competitive market, nothing achieves Pareto optimality, so I don’t think VCG should be judged too harshly. It is not a practical mechanism, but it is somewhat enlightening.
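For readers who haven’t seen VCG worked out, here is a minimal sketch of its payment rule in the single-item case, where it reduces to the familiar second-price auction. The function name and the example bids are mine, purely illustrative.

```python
# VCG payment rule for a single-item auction (a sketch; in this special case
# it is just the second-price auction). Each bidder pays the externality they
# impose on the others: the others' best welfare without them, minus the
# others' welfare under the chosen allocation.
def vcg_single_item(bids):
    """bids: dict mapping bidder -> reported value. Returns (winner, payment)."""
    winner = max(bids, key=bids.get)
    others = [v for b, v in bids.items() if b != winner]
    welfare_others_without_winner = max(others) if others else 0.0
    welfare_others_with_winner = 0.0  # the losing bidders get nothing
    return winner, welfare_others_without_winner - welfare_others_with_winner

print(vcg_single_item({"alice": 10.0, "bob": 7.0, "carol": 3.0}))
# ('alice', 7.0): the payment depends only on the others' bids, which is why
# reporting your true value is a dominant strategy in this setting.
```

The collusion worry is visible even in this tiny case: if the second-highest bidder agrees to bid near zero, the winner’s payment collapses, at the seller’s expense.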
Regarding Nash equilibrium:
Here is an example which at least calls into doubt the normal justification.
But your example of the American playing B while the Italians play A is not a Nash equilibrium. Your example only demonstrates that it is foolish to promote a mechanism for which the equilibrium is not unique.
To clarify: Abreu-Matsushima fails in practice, regardless of whether there is collusion (and certainly it fails entirely if there is a coalition of even two players). VCG is dominant strategy truthful, but fails in the presence of even two colluding players. I agree that VCG is extremely interesting, but I also think that you should not consider the problem solved once you know VCG. Also, there are mechanisms which do much better than competitive markets can hope to. The question now is how well a benevolent dictator can allocate goods (or whatever you are trying to do).
I agree that my example is not a Nash equilibrium. The point was that rational players may not play a Nash equilibrium. If your notion of a reasonable solution is “it works at equilibria” then sure, this isn’t a counterexample. But presumably the minimal thing you would want is “it works when the players are all perfectly rational and don’t collude,” which this example shows isn’t even satisfied if there are multiple Nash equilibria.
Most mechanisms don’t have a unique Nash equilibrium. The revelation principle also doesn’t preserve the uniqueness of a Nash equilibrium, if you happened to have one to begin with.
A Nash equilibrium is frequently not Pareto efficient; if everyone changed their strategy at once, everyone could do better.
The Traveler’s Dilemma is a game similar to the Prisoner’s Dilemma, and humans usually don’t play the Nash equilibrium strategy (a quick computation of that equilibrium is sketched after this comment).
In other words,
This notion of mechanism design, and more generally of rational play, is certainly interesting mathematics, but in practice it often leads to mechanisms that consistently perform very badly (sometimes it gives good mechanisms, but that is no thanks to the formalism)
means “people often don’t behave the way game theory says they should, and assuming that they will is often foolish.”
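As promised above, here is a quick sketch computing the Nash equilibrium of the Traveler’s Dilemma by brute force, using the standard textbook parameters (claims between 2 and 100, bonus/penalty of 2); the code is mine, purely illustrative.

```python
# Traveler's Dilemma (Basu): each player claims an integer in [2, 100]; both
# are paid the lower claim, with the lower claimer getting +2 and the higher
# claimer -2. These are the standard textbook numbers.
LOW, HIGH, BONUS = 2, 100, 2

def payoff(mine, theirs):
    if mine < theirs:
        return mine + BONUS
    if mine > theirs:
        return theirs - BONUS
    return mine

def best_response(theirs):
    return max(range(LOW, HIGH + 1), key=lambda claim: payoff(claim, theirs))

symmetric_equilibria = [c for c in range(LOW, HIGH + 1) if best_response(c) == c]
print(symmetric_equilibria)
# [2]: the unique Nash equilibrium is for both players to claim the minimum,
# even though both claiming 100 pays vastly more; in experiments people
# typically claim close to 100.
```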
A Nash equilibrium is frequently not Pareto efficient; if everyone changed their strategy at once, everyone could do better.
If everyone does better at a different Nash equilibrium, then that just shows that being an NE is necessary, but not sufficient, for mutual rationality.
If everyone does better at a joint strategy that is not an NE (PD, for example), then one of the players is not playing rationally—he could do better with another strategy, assuming the other player stands pat.
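To separate those two cases, here is the Prisoner’s Dilemma with standard textbook payoffs (purely illustrative): the only Nash equilibrium is mutual defection, and the mutually better outcome is not an equilibrium precisely because either player gains by deviating unilaterally.

```python
# Prisoner's Dilemma, standard textbook payoffs.
pd = {
    ("C", "C"): (3, 3),   # mutually better, but not stable
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # the unique Nash equilibrium
}
for (s1, s2), (u1, u2) in pd.items():
    best1 = max(pd[(d, s2)][0] for d in "CD")  # player 1's best reply to s2
    best2 = max(pd[(s1, d)][1] for d in "CD")  # player 2's best reply to s1
    print((s1, s2), "NE" if (u1, u2) == (best1, best2) else "not NE")
# Only ("D", "D") is an equilibrium: from ("C", "C") either player does
# strictly better by switching to "D" while the other stands pat.
```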
… people often don’t behave the way game theory says they should, and assuming that they will is often foolish.
Assuming that they won’t be rational can often be foolish too.
Rational-agent game theory is not claimed to have descriptive validity; its validity is prescriptive or normative. Or, to be more precise, it provides normatively valid advice to you, under the assumption that it is descriptively valid for everyone else.
And yes, I do appreciate that this is a very weird kind of validity for a body of theory to claim for itself.