You’re right—and as my professor said, since I had a better initial situation, I should have been able to do at least as well as my opponent.
Tearing up two slips would have been unlikely to beat tearing up one—the latter creates the necessary scarcity (and sets off the auction) while not diminishing the total number of deals by much. In fact, since my opponent took in more than five dollars, I couldn’t have won with two torn slips, but I should at least have been even with my opponent, and might (as my reply to CronoDAS notes) have been able to win via a trick.
I could refuse a bad deal, but my partners knew that I was trying to win $20, whereas they stood to lose at most a dollar by walking away from a deal with me; they had less to lose, and thereby had the stronger bargaining position (the literature on Nash bargaining is relevant here).
I wasn’t there, but it seems unlikely that the reason people bargained so hard as to leave you almost nothing (a 10-90 split), while the other side gave the other guy almost ALL of their money, was merely that they realized you had an unfair advantage. Unless this was brought up frequently as the reason, I find it highly dubious, and would guess that the other guy was simply better liked than you were, such that the people in the class wanted him to win and didn’t care whether you won (or even wanted you to lose).
Always possible, but I wasn’t suggesting that I had an unfair advantage. Quite the opposite: game-theoretically, my opponent was in a vastly preferable position, as long as I didn’t tear up one of my slips.
To see this, imagine there were only two partners per player. One player can make a deal with each partner, while the other player can make only one deal in total. The partners of the former player are playing the Nash bargaining game; the partners of the latter are participating in an auction for the right to play the Nash bargaining game.
Since the theoretical/canonical outcome of the bargaining game is that the person in the stronger bargaining position (the person with less to lose from walking away) takes a larger share of the money, it’s not a big leap to see how the outcomes should differ. When I could make a deal with each partner, they were in the stronger bargaining position (not being eligible for the $20 meta-prize). In the other case, they knew that they could only get some money if they were allowed to come to the table, so to speak, and so they competed with each other in essentially a bidding war for that opportunity.
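To make the “less to lose” point concrete, here is a toy calculation (a sketch only; the function name and the specific numbers are mine, not the actual classroom stakes). Under the standard Nash bargaining solution with transferable utility, each side receives its disagreement payoff plus half of the surplus that agreement creates, so the side with the better outside option takes the larger share:

```python
# Sketch: the standard Nash bargaining solution for splitting a pot of
# size S with transferable utility. Maximizing (x - d1)(S - x - d2)
# gives each player their disagreement payoff plus half the surplus.
# All numbers here are illustrative, not the actual classroom stakes.

def nash_split(S, d1, d2):
    surplus = S - d1 - d2              # what agreeing adds over walking away
    return d1 + surplus / 2, d2 + surplus / 2

# A partner who loses almost nothing by walking away (d2 close to the
# stake) captures nearly everything from someone who badly needs a deal:
print(nash_split(S=10, d1=0, d2=8))    # (1.0, 9.0), i.e. the 10-90 split mentioned above
```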
Does that make sense? As further evidence that I wasn’t simply unusually unpopular, I should note that the outcome I described was standard, happening year after year in the class. Indeed, the professor relied on it turning out that way to make a point; he would have looked rather foolish if I had come out on top despite the apparent structural disadvantages.
If that happens every year, then I think that is strong evidence that the reasons you provide are correct. Surprising and interesting…
If you are interested in those kinds of social dynamics, I highly recommend studying game theory—it’s absolutely full of surprising results and predictions.
In one class, we proved that for a certain model of soccer penalty kicks, if a kicker got better at shooting (increased the chance of scoring, ceteris paribus) but only in one direction (left or right), he actually was less likely to score because it was easier for the goalie to predict which side he would favor.
That doesn’t sound right. Why couldn’t you simply choose to keep on randomizing 50/50? (Or better yet, calculate an optimal mixed strategy which should be at least as good as randomizing. But my immediate reaction is just generated by the heuristic that capability improvements should never hurt you because you can always choose to go on doing what you would have done previously.)
Ah, of course, I forgot a prepositional phrase: he actually was less likely to score on that side because it was easier for the goalie to predict which side he would favor.
(Incidentally, this proposition has been empirically tested in G.C. Moschini, Economics Letters 85 (2004) 365–371.)
However, we do have to be careful in games of strategy about what we call a capability improvement. Increasing my payoff in a single cell can change the strategic relationships among the cells, preventing me from credibly committing to a particular strategy and thereby worsening my outcome in the game.
As an example, imagine we have a game defined as follows:
(U,L) ⇒ (1,1)
(U,R) ⇒ (0,0)
(D,L) ⇒ (0,0.9)
(D,R) ⇒ (1,0)
where the pairs are (x strategy, y strategy) mapped to (payoff to x, payoff to y).
The unique Nash equilibrium is (U,L), so each player receives a payoff of 1.
Now “improve” player y’s capabilities by making (U,R) ⇒ (0,1.1). There is now no equilibrium in pure strategies, and the unique mixed-strategy equilibrium is Pr(U) = 0.9, Pr(L) = 0.5. The expected payoff to the “improved” player is 0.99, and to the other player 0.5, both down from their previous equilibrium values of 1 each. Moreover, the damage to player y grows as her payoff in (D,L) decreases (derivations available on request).
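For anyone who wants to check the arithmetic, here is a quick sketch (the function name and parameterization are mine): write a for y’s payoff in (D,L) and r for her “improved” payoff in (U,R), then solve the two indifference conditions directly:

```python
# Sketch: solve the 2x2 example's mixed-strategy equilibrium via the
# usual indifference conditions, with a = y's payoff in (D,L) and
# r = y's "improved" payoff in (U,R); all other payoffs as given above.

def mixed_equilibrium(a=0.9, r=1.1):
    # y mixes so that x is indifferent between U and D:
    #   q*1 + (1-q)*0 = q*0 + (1-q)*1, with q = Pr(L)  =>  q = 0.5
    q = 0.5
    # x mixes so that y is indifferent between L and R, with p = Pr(U):
    #   p*1 + (1-p)*a = p*r + (1-p)*0  =>  p = a / (a + r - 1)
    p = a / (a + r - 1)
    return {"Pr(U)": p, "Pr(L)": q, "payoff_x": q, "payoff_y": r * p}

print(mixed_equilibrium())           # Pr(U)=0.9, Pr(L)=0.5, payoffs 0.5 and 0.99
print(mixed_equilibrium(a=0.5))      # shrinking y's (D,L) payoff hurts her more: ~0.917
```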
Off the top of my head, I suspect your heuristic applies in zero-sum games, but not necessarily elsewhere. Unless the players could read each other’s source code...
Related: here’s a fascinating recent Reddit thread about generating random numbers with your brain while playing poker. I’m curious if the LW community can come up with better ways, because the ones proposed there strike me as inadequate. IMO, memorizing a longish string of random digits beforehand was the best strategy proposed.
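For what it’s worth, here is a sketch of how the memorized-digits approach could turn into arbitrary-probability decisions at the table (the digit string, the threshold scheme, and the names here are my own invention, not anything from the thread):

```python
# Sketch of the memorized-digit-string idea: treat each pre-memorized
# digit as a uniform draw from {0, ..., 9} and compare it to a threshold.
# The string below is a stand-in; you would memorize genuinely random
# digits beforehand and never reuse a digit within a session.

MEMORIZED_DIGITS = "2719485036518274903652417809"
position = 0

def bernoulli(tenths):
    """Return True with probability tenths/10, consuming one digit."""
    global position
    digit = int(MEMORIZED_DIGITS[position])
    position += 1
    return digit < tenths

# e.g., decide whether to bluff this hand 30% of the time:
if bernoulli(3):
    print("bluff")
else:
    print("play straightforwardly")
```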
As a further note, though, if by “doesn’t seem likely that you could actually end up doing worse than those with fewer slips” you mean that I should have been able to do at least as well even without tearing up a slip or otherwise limiting my own options, then no—I was in a weaker bargaining position at the beginning, and game-theoretically I should have ended up worse off than my opponent. That was a key finding of Thomas Schelling’s, though he applied it to nuclear warfare (see The Strategy of Conflict, and also the link at the beginning of this post, for more on his bare-knuckle game theory).