I’m confused about defection becoming a dominant strategy, because the existence of a dominant strategy suggests to me that there should be a unique Nash equilibrium here, which is not the case. Everyone defecting is a Nash equilibrium, but 50 people cooperating and 49 defecting is a Nash equilibrium as well, and a better one at that. Something (quite likely my intuition regarding Nash equilibria in games with more than 2 players) is off here. Also, it is of course possible to calculate the optimal probability that we should defect, and I agree with FeepingCreature that this should be 0.5-e, where e depends on the size of the player base and goes to 0 as the player base becomes infinite. But I highly doubt that there’s an elegant formula for it. It seems (in my head at least) that already for, say, n=5 you have to do quite a bit of calculation, let alone n=99.
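Both equilibria are easy to verify by brute force under one illustrative payoff model (my assumption, not spelled out in the original discussion): defecting pays a small bribe EPS, everyone loses L >> EPS if the attack succeeds, and the attack succeeds when at least 50 of the 99 players defect. A quick sketch:

```python
# Hypothetical payoffs: EPS = bribe for defecting, L = loss to every player
# if the attack succeeds (which happens when >= 50 of the 99 players defect).
N, EPS, L = 99, 1.0, 100.0

def payoff(i_cooperate, other_cooperators):
    total_coop = other_cooperators + (1 if i_cooperate else 0)
    attack_succeeds = (N - total_coop) >= 50
    bribe = 0.0 if i_cooperate else EPS
    return bribe - (L if attack_succeeds else 0.0)

def is_nash(c):
    """Is the symmetric profile with exactly c cooperators a Nash equilibrium?"""
    # No cooperator should gain by switching to defection...
    if c > 0 and payoff(False, c - 1) > payoff(True, c - 1):
        return False
    # ...and no defector should gain by switching to cooperation.
    if c < N and payoff(True, c) > payoff(False, c):
        return False
    return True

print([c for c in range(N + 1) if is_nash(c)])  # the two equilibria: c = 0 and c = 50
```

Under these payoffs exactly two symmetric profiles survive the deviation checks: everyone defecting, and 50 cooperating with 49 defecting, matching the two equilibria above.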
Nice. If we analyze the game using Vitalik’s 2x2 payoff matrix, defection is a dominant strategy. But now I see that’s not how game theorists use the phrase. They would work with the full 99-player payoff matrix, and there defection is not a dominant strategy, because, as you say, it’s a bad strategy if we know that 49 other people are cooperating and 49 other people are defecting.
There’s a sleight of hand going on in Vitalik’s analysis, and it lies in the phrase “regardless of one’s epistemic beliefs [one is better off defecting]”. If my epistemic belief is that 49 other people are cooperating and 49 other people are defecting, then it’s not true that defection is my best strategy. Of course, Vitalik’s 2x2 matrix simply does not allow me to hold such refined epistemic beliefs: I have to get by with “attack succeeds” versus “attack fails”.
Which kind of makes sense, because it’s true that I probably won’t find myself in a situation where I know for sure that 49 other people are cooperating and 49 other people are defecting, so the strict game-theoretic definition of a dominant strategy is probably less relevant here than something like Vitalik’s “aggregate” version. Still, there are assumptions here that are not made clear in the original analysis.
So, I did not forget about that particular case. In my brand of cryptoeconomic analysis, I try to decompose cooperation incentives into three types:
1. Incentives generated by the protocol
2. Altruism
3. Incentives arising from the desire to have the protocol succeed because one has a stake in it
I often group (2) and (3) into one category, “altruism-prime”, but here we can separate them.
The important point is that category 1 incentives are always present as long as the protocol specifies them, and category 2 incentives are always present, but the size of category 3 incentives is proportional to each node’s “probability of being pivotal”: essentially, the probability that the node actually is in a situation where its action will determine the outcome of the game.
Note that I do not consider 49/50 Nash equilibria realistic; in real massively multiplayer games, the level of confusion, asynchronicity, trembling hands/irrational players, bounded rationality, etc, is such that I think it’s impossible for such a finely targeted equilibrium to maintain itself (this is also the primary keystone of my case against standard and dominant assurance contracts). Hence I prefer to think of the probability distribution over the number of players that will play a particular strategy, and from there the probability of a single node being pivotal.
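To make “probability of being pivotal” concrete under one toy assumption (mine, not from the original: each of the other 98 players cooperates independently with probability q), a node is pivotal exactly when 49 of the 98 others cooperate, so its single vote decides whether the attack succeeds:

```python
from math import comb

def p_pivotal(q, n_others=98, threshold=49):
    # Binomial probability that exactly `threshold` of the other players
    # cooperate, i.e. my own vote tips the attack between success and failure.
    return comb(n_others, threshold) * q**threshold * (1 - q)**(n_others - threshold)

# Near q = 0.5 a node is pivotal fairly often; away from 0.5 the
# probability collapses quickly, which is why the distribution matters.
for q in (0.5, 0.4, 0.3):
    print(q, p_pivotal(q))
```

At q = 0.5 the pivotality probability is already only about 8%, and it falls off sharply as q moves away from 0.5, which is the sense in which category 3 incentives can shrink toward zero in large games.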
In the case of cryptoeconomic consensus protocols, I consider it desirable to achieve a hard bound of the form “the attacker must spend capital of at least C/k”, where C is the amount of capital invested by all participants in the network and k is some constant. Since we cannot prove that the probability of being pivotal will be above any particular 1/k, I generally prefer to assume that it is simply zero (i.e., the ideal environment of an infinite number of nodes of zero size). In this environment, my usage of “dominant strategy” is indeed fully correct. However, in cases where hostile parties are involved, I assume that the hostile parties are all colluding; this maximally hard double standard is a sort of principle of charity that I believe we should hold to.