if I assume that everyone’s utility function is exactly like mine
Did you just switch context again? My claim is about what happens if everyone strictly prefers to tie rather than to lose. In this case, given others’ strategies, any individual’s optimal strategy is to answer 2⁄3 of the average. The only way everyone can answer 2⁄3 of the average is if everyone plays 0, and this is the only strategy that nobody has an incentive to deviate from.
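A quick way to see the fixed point: if everyone guesses the same number g, the target is (2/3)g, so the guesses are consistent only if g = (2/3)g, which forces g = 0. A minimal sketch of the repeated best response, starting from an arbitrary guess of 50 (the starting value is made up, not from the discussion):

```python
# Repeatedly best-responding to a symmetric guess of g means replacing g with (2/3) * g.
g = 50.0  # arbitrary starting point
for _ in range(30):
    g = (2 / 3) * g
print(g)  # ~0.00026 -- the common guess collapses toward 0, the only fixed point
```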
Maybe I’m being dense, but bear with me for a moment....
Assume: I get X utilons from winning, Y from tying, and Z from losing, where X >= Y >= Z. Everyone playing the game has exactly the same preferences.
If I (and everyone else) play 0, I get Y utilons. Straightforward.
If I play a value that gives me a W chance of winning outright and a (1 - W) chance of losing (with an inconsequential chance of tying, because I added a small random offset), I will gain W X - (1 - W) Z utilons on average.
Assume W is fairly low, the worst and most likely case being 1/N, where N is the number of participants, since we’re assuming everyone is exactly like me.
Therefore, if Y > X/N - Z + Z/N, I (and everyone) should play 0. Otherwise, we should play the thing that gives us a W chance of winning. (Hopefully I did the algebra right.)
So, depending on the values for X, Y, and Z (and N), we could get your scenario or mine.
If Y is close to X, we get yours. If it is much lower than X, we will probably get mine.
All that to say I can create a scenario where the Nash equilibrium really is for everyone to play a small positive number by tweaking the players’ utility functions, even given the constraint that winning, tying, and losing are valued in that order.
If this is clear to you, then we’ve been talking past each other. If not, then I don’t understand Nash equilibrium very well (or I’m an incredibly sucky writer).
EDIT: on second thought, I think my math is probably quite bad, esp. with respect to Z. Anyway, perhaps the central idea of my post is still intelligible, so I’ll leave it be.
EDIT2: Ah, I got a sign backwards (consider that if the penalty for losing is your house gets burned down, Z is a large negative number).
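For concreteness, here is a minimal sketch of the comparison, using the corrected sign from the edit (expected value of deviating = W X + (1 - W) Z, with W = 1/N) and entirely made-up payoff numbers:

```python
def stay_at_zero(X, Y, Z, N):
    """True if the sure tie (Y utilons) beats deviating with a 1/N chance of winning.

    Expected value of deviating: X/N + (1 - 1/N) * Z  (sign as corrected in the edit).
    """
    return Y > X / N + (1 - 1 / N) * Z

# Made-up payoffs for ten players.
print(stay_at_zero(X=10, Y=9, Z=0, N=10))    # True: tying is nearly as good as winning, so stay at 0
print(stay_at_zero(X=10, Y=0.5, Z=0, N=10))  # False: tying is barely better than losing, so deviate
```

Which of the two outcomes you land in is exactly the Y-close-to-X versus Y-much-lower-than-X split described above.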
There are some games that don’t have a Nash equilibrium. Consider a 1-player game where the available strategies are the numbers between 0 and 1, and your payoff is 1-x if you pick x>0 and 0 if you pick x=0. There is no Nash equilibrium.
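To spell out why: x = 0 pays 0 and is beaten by, say, x = 1/2, while any x > 0 is beaten by x/2, so no strategy is a best response. A tiny sketch of the payoffs creeping toward 1 without ever attaining it:

```python
def payoff(x):
    # Payoff from the 1-player example: 1 - x for x > 0, and 0 at x = 0.
    return 1 - x if x > 0 else 0

x = 0.5
for _ in range(5):
    print(x, payoff(x))  # halving x always improves the payoff
    x /= 2
# Payoffs: 0.5, 0.75, 0.875, ... -> approach 1, but no choice of x ever achieves it.
```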
If many players assign 0 utilons to tying and losing in this game, and 1 to winning, then 0 is still a Nash equilibrium, but if there is any positive chance that some gimp will submit a nonzero answer just for the hell of it, then you definitely shouldn’t play zero.
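To put rough numbers on that, here is a Monte Carlo sketch (the player count, noise probability, and payoffs are all made up): an outright win is worth 1 and everything else 0, the other players guess 0, and with some small probability one of them submits a uniform random answer instead. Guessing 0 can then only ever tie, while a small positive guess wins outright whenever the stray answer is large enough.

```python
import random

def expected_utility(my_guess, n_players=10, p_noise=0.05, trials=100_000):
    """Expected utility (win = 1, tie/lose = 0) when the other players guess 0,
    except that with probability p_noise one of them guesses uniformly at random."""
    total = 0.0
    for _ in range(trials):
        others = [0.0] * (n_players - 1)
        if random.random() < p_noise:
            others[0] = random.uniform(0, 100)  # the stray nonzero answer
        guesses = [my_guess] + others
        target = (2 / 3) * (sum(guesses) / n_players)
        best = min(abs(g - target) for g in guesses)
        winners = [g for g in guesses if abs(g - target) == best]
        if abs(my_guess - target) == best and len(winners) == 1:
            total += 1.0  # count only outright wins
    return total / trials

print(expected_utility(0.0))  # ~0.0: guessing 0 can only tie
print(expected_utility(2.0))  # ~0.04: a small positive guess sometimes wins outright
```

With these particular numbers a guess of 2 never does worse than a guess of 0 and wins alone whenever the stray answer lands above about 13, so its expected utility is strictly higher once the noise probability is positive.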
By the way, I guessed 100. I’m not very good with numbers—I think 100 is the best answer, right ;-0
A Nash equilibrium is a set of strategies from which no player has an incentive to deviate, holding others’ strategies constant. Take any putative set of (pure) equilibrium strategies; if there is any individual who loses when this set of strategies is played, then they have an incentive to change their guess to 2⁄3 of the average, and this set of strategies is not a Nash equilibrium. This implies that you are not in Nash equilibrium unless everyone wins.*
Holding other players’ strategies constant, you have a single optimal strategy, which is to play 2⁄3 of the average. If there is another player who has already guessed 2⁄3 of the (new) average then you tie with probability 1; if there is not, you win with probability 1.
* Note that everyone winning is necessary, but not sufficient, for a Nash equilibrium. Everyone playing 67 lets everyone win, but it is not a Nash equilibrium. If anyone prefers not to tie, they could deviate and win by themselves.
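A quick numeric check of that footnote, with an arbitrary player count: holding everyone else at 67, a single deviator can pick the guess that solves x = (2/3) · (average including x) and land exactly on the target.

```python
N = 10                      # hypothetical player count
others = 67.0
# Solve x = (2/3) * ((N - 1) * others + x) / N for the lone deviator's guess.
x = 2 * (N - 1) * others / (3 * N - 2)
average = ((N - 1) * others + x) / N
target = 2 * average / 3
print(round(x, 2), round(target, 2))   # 43.07 43.07 -- the deviator sits exactly on the target
print(round(abs(others - target), 2))  # 23.93 -- everyone still at 67 is far from it
```

The deviator is then the unique winner, so anyone who prefers winning alone to tying has a profitable deviation, and 67-across-the-board is not an equilibrium.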
So games in which there cannot be a tie have no Nash equilibrium?
I must have misread the wikipedia page; I thought the requirement was that there’s no way to do better with an alternative strategy.
I was also assuming that everyone guesses at the same time, as otherwise the person to play last can always win (and so everyone will play 0). But this means it’s no longer a perfect-information game, and that there’s not going to be a Nash equilibrium. Thanks for your patience :)
So games in which there cannot be a tie have no Nash equilibrium?
No, that’s not a general rule. It’s just that in this particular game, if you’re losing you always have a better option that you can reach just by changing your own strategy. If your prospects for improvement relied on others changing their strategies too, then you could lose and still be in a Nash equilibrium. (For an example of such a game, see Battle of the Sexes.)
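For reference, a standard Battle of the Sexes payoff table (the usual textbook numbers, row player's payoff listed first):

              Opera      Football
  Opera       (2, 1)     (0, 0)
  Football    (0, 0)     (1, 2)

Both (Opera, Opera) and (Football, Football) are Nash equilibria. In (Opera, Opera) the column player gets only 1 and would prefer the other equilibrium, but switching alone to Football drops them to 0, so they have no unilateral improvement.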
I thought the requirement was that there’s no way to do better with an alternative strategy.
Sort of. It’s that there’s no way to do better with an alternative strategy, given perfect knowledge of others’ strategies.
I was also assuming that everyone guesses at the same time
They do in the actual game; it’s just that this isn’t relevant to evaluating what counts as a Nash equilibrium.
But this means it’s no longer a perfect-information game, and that there’s not going to be a Nash equilibrium.
I’m not entirely clear what you mean by the first half of this sentence, but the conclusion is false. Even if everyone guessed in turn, there would still be a Nash equilibrium with everyone playing zero.
Two sign corrections to the expected-utility math above:
W X - (1 - W) Z should be W X + (1 - W) Z
Y > X/N - Z + Z/N should be Y > X/N + Z - Z/N
No problem. ;)
Sorry I didn’t/can’t continue the conversation; I’ve gotten rather busy.