You do see that zero is the only Nash equilibrium, right? If everyone plays zero, you gain nothing by defecting alone, because 1/N is still better than nothing (and your guess will always be greater than 2⁄3 of the average).
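The equilibrium claim is easy to check numerically. A minimal sketch, assuming a prize of 1 split evenly among whoever is closest to 2⁄3 of the average — the thread never fixes exact payoffs, so the `payoffs` helper and the prize of 1 are illustrative assumptions:

```python
def payoffs(guesses):
    """Split a prize of 1 among the players closest to 2/3 of the mean guess."""
    target = (2 / 3) * sum(guesses) / len(guesses)
    best = min(abs(g - target) for g in guesses)
    winners = [i for i, g in enumerate(guesses) if abs(g - target) == best]
    return [1 / len(winners) if i in winners else 0.0 for i in range(len(guesses))]

n = 10
print(payoffs([0] * n))               # everyone ties at 0: each gets 1/N = 0.1
print(payoffs([0] * (n - 1) + [50]))  # a lone defector's guess exceeds 2/3 of
                                      # the average, so the defector gets 0
```

This only checks a single uncoordinated deviation, which is exactly the scope of the Nash argument being debated here.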
So you’re arguing that it’s not rational, under the assumption of common rationality, to play the unique Nash equilibrium?
Everyone playing 0 is only better than everyone playing 67 because it corrects individual defectors. It doesn’t correct multiple coordinated defectors. If we know that there are no defectors, 67 is as good as 0, and if we know that there is some number of defectors who could conspire to play something else, 0 is not much better than 67. This becomes more interesting if the payoff on the non-equilibrium choices is greater.
Nash equilibrium is not a universal principle, it’s merely a measure against individual uncoordinated madmen, agents unable to cooperate.
it’s merely a measure against individual uncoordinated madmen
Actually, I think it does rather better against uncoordinated rational agents than it does against crazy people. I’m not sure why it should have any traction at all against the latter.
More generally, you’re right, but: (a) that didn’t seem to be the nature of lavalamp’s argument; and (b) unless it’s also incentive compatible in the standard sense, I tend to consider the possibility of coordination as changing the rules of the game (though that’s just a personal semantic preference).
By madmen I meant “rational” agents who refuse to consider an option or implications of coordination (the kind that requires no defection).
Impossibility of coordination is a nontrivial concept, I don’t quite understand what it means (I should...). If everyone follows a certain procedure that leads them to agree on 0, why can’t they agree on 67 just as well?
If everyone follows a certain procedure that leads them to agree on 0, why can’t they agree on 67 just as well?
Because given what others are doing, no individual has an incentive to deviate from 0 (regardless of whether they’ve agreed to it or not). In contrast, if they’re really trying to win, every individual agent has an incentive to deviate from 67.
ETA: You can get around the latter problem if you have an enforcement mechanism that lets you punish defectors; but that’s adding something not in the original set up, which is why I prefer to consider it changing the rules of the game.
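The incentive to deviate from an all-67 profile can be made concrete. A sketch with a hypothetical N of 10: solving the best-response equation g = (2/3)·((N−1)·67 + g)/N for the lone deviator gives g = 2·67·(N−1)/(3N−2), which lands exactly on the new target:

```python
# One player deviating from everyone-plays-67 can hit the target exactly.
n = 10
others_total = 67 * (n - 1)
g = 2 * others_total / (3 * n - 2)   # fixed point of g = (2/3)*(others + g)/n
guesses = [67] * (n - 1) + [g]
target = (2 / 3) * sum(guesses) / n
print(g, target)                     # both come out near 43.07: the deviator
                                     # is closest and wins outright
```

This is why 67 needs an enforcement mechanism to be stable while 0 does not.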
Coordination determines the joint outcome, or some property of the joint outcome; possibility of defection means lack of total coordination for the given outcome. Punishment is only one of the possible ways of ensuring coordination (although the only realistic one for humans, in most cases). Between the two coordinated strategies, 67 is as good as 0.
What I wondered is what it could mean to establish the total lack of coordination, impossibility of implicit communication through running common algorithms, having common origin, sharing common biases, etc., so that the players literally can’t figure out a common answer in e.g. battle of the sexes.
I’m sure I’m missing your point, but FWIW my original claim was only about the (im)possibility of coordination on a non-Nash equilibrium solution (i.e. of coordinating on a solution that is not incentive-compatible). Coordinating on one of a number of Nash equilibria (which is the issue in battle of the sexes) is a different matter entirely (and not one I am claiming anything about).
Agreed. This is why I specified that I think there are others who also would value a unique win, and why, in another comment I mentioned that of those of us who value a unique win, someone has to guess high.
This leads to quite a nice dilemma (as we’d all prefer for someone else to guess high), unless we believe cousin_it, who says he guessed 100.
Assuming that the rewards (and/or penalties) were adjusted such that everyone greatly prefers a tie to a loss, then I would have to agree that 0 is the Nash equilibrium (and would guess 0).
However, given that the only available reward here is social capital (if even that), I’d rather win outright, even if it brings a risk of losing, and I don’t see why I would be alone in that order of preferences.
And I think I may be distorting the game as much as cousin_it, and equally unintentionally. Sorry...
Assuming that the rewards (and/or penalties) were adjusted such that everyone greatly prefers a tie to a loss, then I would have to agree that 0 is the Nash equilibrium (and would guess 0).
I think we’ve basically resolved this, but just to clear up loose ends, I’m pretty sure it will be a Nash equilibrium provided everyone strictly prefers a tie to a loss; as far as I can tell the preference shouldn’t need to be “great”.
It has to be great enough to make me unwilling to risk a loss for the possibility of an outright win, which is why I said “greatly.” But I suppose it’s relative.
Nash equilibrium doesn’t work like that. Each player’s strategy must be optimal given perfect knowledge of others’ equilibrium strategies. Your probabilistic reasoning only applies if you don’t know others’ equilibrium strategies (or if they’re playing mixed strategies), but that isn’t relevant here.
Sorry, context switch on my part—I wasn’t thinking about Nash equilibrium when I wrote that.
But I still don’t see your point—if I assume that everyone’s utility function is exactly like mine, I don’t see how my probabilistic reasoning would differ from an equilibrium strategy, if I’m using the term right.
if I assume that everyone’s utility function is exactly like mine
Did you just switch context again? My claim is about what happens if everyone strictly prefers to tie rather than to lose. In this case, given others’ strategies, any individual’s optimal strategy is to answer 2⁄3 of the average. The only way everyone can answer 2⁄3 of the average is if everyone plays 0, and this is the only strategy that nobody has an incentive to deviate from.
Maybe I’m being dense, but bear with me for a moment....
Assume: I get X utilons from winning, Y from tying, and Z from losing, where X >= Y >= Z. Everyone playing the game has exactly the same preferences.
If I (and everyone else) play 0, I get Y utilons. Straightforward.
If I play a value that gives me W chance of winning outright, and (1-W) chance of losing (with an inconsequential chance of tying because I added a small random offset), I will gain W X - (1 - W) Z utilons on average.
Assume W is fairly low, the worst and most likely case being 1/N where N is the number of participants, since we’re assuming everyone is exactly like me.
Therefore, if Y > (X/N - Z + Z/N), I (and everyone) should play 0. Otherwise, we should play the thing that gives us W chance of winning. (hopefully I did the algebra right)
So, depending on the values for X, Y, and Z (and N), we could get your scenario or mine.
If Y is close to X, we get yours. If it is greatly lower than X, we will probably get mine.
All that to say I can create a scenario where the Nash equilibrium really is for everyone to play a small positive number by tweaking the players’ utility functions, even given the constraint that winning, tying, and losing are valued in that order.
If this is clear to you, then we’ve been talking past each other. If not, then I don’t understand Nash equilibrium very well (or I’m an incredibly sucky writer).
EDIT: on second thought, I think my math is probably quite bad, esp. with respect to Z. Anyway, perhaps the central idea of my post is still intelligible, so I’ll leave it be.
EDIT2: Ah, I got a sign backwards (consider that if the penalty for losing is your house gets burned down, Z is a large negative number).
W X - (1 - W) Z should be W X + (1 - W) Z
Y > (X/N - Z + Z/N) should be Y > (X/N + Z - Z/N)
There are some games that don’t have a Nash equilibrium. Consider a 1-player game where the available strategies are the numbers between 0 and 1, and your payoff is 1-x if you pick x>0 and 0 if you pick x=0. There is no Nash equilibrium.
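A sketch of that one-player game: every positive x is beaten by a smaller positive x, and x = 0 is beaten by any positive x, so no choice is a best response to itself:

```python
def payoff(x):
    # payoff is 1 - x for x > 0, but drops to 0 exactly at x = 0
    return 1 - x if x > 0 else 0

for x in [0, 0.5, 0.1, 0.01]:
    print(x, payoff(x))   # payoffs climb toward 1 as x shrinks, but the
                          # supremum is never attained, so no equilibrium exists
```

The point is only that equilibrium existence isn’t automatic once strategy sets are infinite; the finite guessing game itself is not such a case.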
If many players assign 0 utilons to tying and losing in this game, and 1 to winning, then 0 is still a Nash equilibrium, but if there is any positive chance that some gimp will submit a nonzero answer just for the hell of it, then you definitely shouldn’t play zero.
By the way, I guessed 100. I’m not very good with numbers—I think 100 is the best answer, right ;-0
A Nash equilibrium is a set of strategies from which no player has an incentive to deviate, holding others’ strategies constant. Take any putative set of (pure) equilibrium strategies; if there is any individual who loses when this set of strategies is played, then they have an incentive to change their guess to 2⁄3 of the average, and this set of strategies is not a Nash equilibrium. This implies that you are not in Nash equilibrium unless everyone wins.*
Holding other players’ strategies constant, you have a single optimal strategy, which is to play 2⁄3 of the average. If there is another player who has already guessed 2⁄3 of the (new) average then you tie with probability 1; if there is not, you win with probability 1.
* Note that everyone winning is necessary, but not sufficient for a Nash equilibrium. Everyone playing 67 lets everyone win, but it is not a Nash equilibrium. If anyone prefers not to tie, they could deviate and win by themselves.
So games in which there cannot be a tie have no Nash equilibrium?
I must have misread the wikipedia page; I thought the requirement was that there’s no way to do better with an alternative strategy.
I was also assuming that everyone guesses at the same time, as otherwise the person to play last can always win (and so everyone will play 0). But this means it’s no longer a perfect-information game, and that there’s not going to be a Nash equilibrium. Thanks for your patience :)
So games in which there cannot be a tie have no Nash equilibrium?
No, that’s not a general rule. It’s just the case that in this particular game, if you’re losing you always have a better option that can be achieved just by changing your own strategy. If your prospects for improvement relied on others changing their strategies too, then you could lose and still be in a Nash equilibrium. (For an example of such a game, see battle of the sexes.)
I thought the requirement was that there’s no way to do better with an alternative strategy.
Sort of. It’s that there’s no way to do better with an alternative strategy, given perfect knowledge of others’ strategies.
I was also assuming that everyone guesses at the same time
They do in the actual game; it’s just that that’s not relevant to evaluating what counts as a Nash equilibrium.
But this means it’s no longer a perfect-information game, and that there’s not going to be a Nash equilibrium.
I’m not entirely clear what you mean by the first half of this sentence, but the conclusion is false. Even if everyone guessed in turn, there would still be a Nash equilibrium with everyone playing zero.
No problem. ;)
Sorry I didn’t/can’t continue the conversation; I’ve gotten rather busy.
So you’re arguing that it’s not rational, under the assumption of common rationality, to play the unique Nash equilibrium?
Is making an “assumption of common rationality” really a rational choice, even here?
With the stakes as low as this, I would assign a very high likelihood to someone getting greater utility from throwing a spanner in the works for the lulz than from a serious attempt at winning, even if at least one such person hadn’t already announced their action.
Is making an “assumption of common rationality” really a rational choice, even here?
FWIW, I never suggested it was. Lavalamp claimed that zero was not rational under the assumptions in the OP’s original justification, one of which was common rationality. It was the validity of that argument I was defending; not its soundness.
The purpose of this game, admittedly, is to test just how complacent / obedient the Overcoming Bias / Less Wrong community has become.
Think about your assumptions:
First you’ve got “common rationality”. But that’s really a smokescreen to hide the fact that you’re using a utility function and simply, dearly, hoping that everybody else is using the same one as you!
Your second assumption is that “you gain nothing by defecting alone”.
There’s no meaningful sense in which you’re “winning” if everybody guesses zero and you do too. The only purpose of it, the only reward you receive for guessing 0 and ‘winning’, is the satisfaction that you dutifully followed instructions and submitted the ‘correct’ answer according to game theory and the arguments put forth by upper echelons of the Less Wrong community.
In fact, there is much to gain by guessing a non-zero number. First of all, it costs nothing to play. Right away, all of your game theory and rationalization is tossed right out the window. It is of no cost to submit an answer of 100, or even to submit several answers of 100. Your theory of games can’t account for this—if people get multiple guesses, submitted from different accounts, you’ll be pretty silly with your submission of 0 as an answer.
“But that would be cheating.” Well, no. See, the game is a cheat. It’s to test “Aumann’s agreement theorem” among this community here. It’s to test whether or not you will follow instructions and run with the herd, buying into garbage about a ‘common rationality’ and ‘unique solutions’, ‘utility functions’ and such.
You see, for me at least, there’s great value in defecting. You of course will try to scare people into believing they’re defecting alone, but here you’re presupposing the results of the experiment—that everybody else is dutifully following instructions. So anyway, I would be greatly pleased if the result turned out to be a non-zero number. It would restore my faith in this community, actually. And to that end, I would submit a high number... if I were to play.
I think that you are profoundly mistaken about the attitudes and dispositions of the vast majority here. You appear to be new, so that’s understandable. As you look around, though, you’ll find a wide array of opinions on the limits of causal decision theory, the aptness of utility functions for describing or prescribing human action, and other topics you assume must be dogma for a community calling itself ‘rationalist’. You might even experience the uncomfortable realization that other people already agree with some of the brilliant revelations about rationality that you’ve derived.
I was an avid visitor of Overcoming Bias, but yes I am new to Less Wrong. I had assumed that the general feel of this place would be similar to Overcoming Bias—much of which was very dogmatic, although there were a few notable voices of dissent (several of whom were censored and even banned).
You might even experience the uncomfortable realization that other people already agree with some of the brilliant revelations about rationality that you’ve derived.
Obviously. But there wouldn’t be a point to my lecturing them, now would there? No, conchis made the canonical argument and I responded. And if you weren’t so uncomfortable with my dissent you might have left a real response, instead of this patronizing and sarcastic analysis.
That’s the problem with the internet: “I’m witty and incisive, you’re sarcastic and sanctimonious”. I’ll admit the tenor of my last sentence was out of line; but I stand by the assertion that your psychoanalysis of this group is well off the mark.
Also, what exactly is so awful about a group norm of playing certain games seriously even when for zero stakes, in order to gather interesting information about the group dynamics of aspiring rationalists?
“I’m witty and incisive, you’re sarcastic and sanctimonious”
Pretty much nails it. pswoo’s initial comment was fairly patronizing itself, so it seems a bit rich to criticise you (orthonormal) for playing along. But whatever.
By way of substantive response. Um, yeah. So, patronizing bits aside, I agree with much of your (pswoo’s) comment. I just don’t think it was especially relevant to the particular conversation you (pswoo) intervened in, which was about the validity of the standard argument rather than its soundness.
I will be very surprised if more than half of the answers are 0.