But a vote for a losing candidate is not “thrown away”; it sends a message to mainstream candidates that you vote, but that they have to work harder to appeal to your interest group to earn your vote. Readers in non-swing states especially should consider what message they’re sending before voting, in any election, for a candidate they don’t actually like.
But that point can still be subject to the same (invalid, IMHO) argument against voting: your vote alone is not going to change the poll’s percentages by any noticeable extent, hence you might as well not vote and nobody will notice the difference.
I’ll explain why I think this line of argument is invalid in another comment. EDIT: here
Also, rationalists are supposed to win. If we end up doing a fancy expected utility calculation and then neglect voting, all the while supposedly irrational voters ignore all of that and vote for their favored candidates and get them elected while ours lose… then that’s, well, losing.
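To make that “fancy expected utility calculation” concrete, here is a minimal sketch of the standard back-of-the-envelope version (the pivotal-vote probability, the utility gap between candidates, and the cost of voting are all made-up illustrative numbers, not estimates for any real election):

```python
# Hedged sketch of the usual expected-utility-of-voting calculation.
# Every number below is an illustrative assumption, not a real estimate.

p_pivotal = 1e-7          # assumed probability that your single vote decides the election
utility_difference = 1e9  # assumed difference in total (social) value between the candidates
cost_of_voting = 20       # assumed personal cost of voting (time, effort), in the same units

expected_benefit = p_pivotal * utility_difference
print(f"Expected benefit of voting: {expected_benefit:.2f}")  # 100.00 with these numbers
print(f"Worth voting? {expected_benefit > cost_of_voting}")   # True with these numbers
```

Whether the calculation comes out in favour of voting depends entirely on the assumed numbers, which is part of what makes the argument contentious.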
That’s actually a better point, but it opens a can of worms: ideally, instrumentally rational agents should always win (or maximize their chance of winning, if uncertainty is involved), but does a consistent form of rationality that allows that actually exist?
Consider two pairs of players playing a standard one-shot prisoner’s dilemma, where the players are not allowed to credibly commit or communicate in any way.
In one case the players are both CooperateBots: they always cooperate because they think that God will punish them if they defect, or they feel a sense of tribal loyalty towards each other, or whatever else. These players win.
In the other case, the players are both utility-maximizing rational agents. What outcome do they obtain?
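A minimal sketch of the comparison, using the usual textbook payoff numbers (assumed here purely for illustration) and assuming the standard answer that a one-shot utility maximizer plays the dominant strategy of defection:

```python
# One-shot prisoner's dilemma with the usual textbook payoffs (assumed for illustration).
# Each entry maps (row move, column move) -> (row payoff, column payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def cooperate_bot():
    return "C"  # always cooperates, whatever the reason

def one_shot_defector():
    # With these payoffs defection strictly dominates in the one-shot game,
    # so this is the textbook "rational" answer.
    return "D"

for name, strategy in [("CooperateBots", cooperate_bot), ("Defectors", one_shot_defector)]:
    a, b = strategy(), strategy()
    print(name, PAYOFFS[(a, b)])  # CooperateBots: (3, 3); Defectors: (1, 1)
```

With these assumed payoffs the CooperateBot pair walks away with 3 each while the pair of defectors gets 1 each, which is exactly the tension the question above is pointing at.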
By having two agents play the same game against different opposition, you compare two scenarios that may seem similar on the surface but are fundamentally different. Obviously, making sure your opponent cooperates is not part of the PD, so you can’t call this winning.

And as soon as you delve into meta-PD, where players can influence other players’ decisions beforehand and/or hand out additional punishment afterwards (as in most real-life situations), rational agents will devise methods by which mutual cooperation can be assured far more reliably than by loyalty or altruism or whatever.

Anyone moderately rational will cooperate if the PD matrix is “cooperate and get [whatever], or defect and have all your winnings taken away by the player community and given to the other player”, and will accordingly win against irrational players; any non-playing rationalist would support such a convention. Although, depending on how and why PD games happen in the first place, this may evolve into “cooperate and have all winnings taken away by the player community, or defect and additionally get punished in an unpleasant way”.
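To illustrate the claim about the modified matrix (again with assumed illustrative numbers): once the community confiscates a defector’s winnings and hands them to the other player, defection no longer dominates, so even a narrowly self-interested agent cooperates.

```python
# PD payoffs after the assumed convention: a defector's winnings are confiscated
# by the player community and given to the other player. Base payoffs are the
# same assumed textbook values as above.
MODIFIED_PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0 + 5, 5 - 5),  # the defector's 5 goes to the cooperator
    ("D", "C"): (5 - 5, 0 + 5),
    ("D", "D"): (1, 1),          # each defector's 1 goes to the other, netting the same
}

# Against either opposing move, cooperating now pays strictly more than defecting,
# so defection no longer dominates:
for other in ("C", "D"):
    coop = MODIFIED_PAYOFFS[("C", other)][0]
    defect = MODIFIED_PAYOFFS[("D", other)][0]
    print(f"vs {other}: cooperate -> {coop}, defect -> {defect}")
```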
By the way, the term CooperateBot only really makes sense when talking about the iterated PD, where it refers to an agent that always cooperates regardless of the results of any previous rounds.