There’s no need to invoke any kind of fancy “superrationality.” There’s just a conflict between individual rationality and group rationality.
As a leader or activist, it’s in my interest to believe and say things like “Yay voting!” because that helps me lead mobs of people and achieve the election results I prefer.
As an individual private citizen, it’s in my interest to stay home and donate to charity, because my vote has much less than a 1⁄100,000,000 chance of swinging an election: history shows that national voting preferences are drawn from a curve that is much more like a normal distribution than a uniform distribution, and in any given election the peak of that curve will sit 1-5% of registered voters away from an even split, based on economic data, approval ratings, etc. In other words, a standard analytical rationalist should be able to predict that less than 1⁄100,000,000th of the curve of possible election outcomes falls under the exact 50-50 tie you would need for your vote to matter in any straightforward instrumental sense. If you would take 5:4 odds in favor of either candidate, you shouldn’t be betting that the race will end in a tie even at odds of 100,000,000 to 1, and if you can’t figure out who to take 5:4 odds for by Election Day, then you haven’t read Green & Gerber’s paper.
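If you want to see roughly where a number like that comes from, here is a minimal sketch of the estimate, assuming a normal model for the two-party vote share; the mean, spread, and electorate size below are invented for illustration, and the answer is extremely sensitive to how far the expected result sits from 50-50.

```python
from math import exp, pi, sqrt

# All numbers below are invented for illustration, not empirical estimates.
N = 130_000_000   # two-party votes cast
mu = 0.55         # expected vote share of the favorite (5 points off an even split)
sigma = 0.02      # spread of plausible vote shares (2 percentage points)

# Your single vote is decisive (roughly) only if the other votes split within
# about one vote of 50-50, i.e. the favorite's share lands in a window of
# width ~1/N around 0.50. Approximate that probability as the normal
# density at 0.50 times the window width.
density_at_tie = exp(-0.5 * ((0.50 - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))
print(f"P(decisive vote) ~ {density_at_tie / N:.1e}")   # about 7e-09 with these inputs
```

Pull the assumed peak back toward 0.50 and that number climbs quickly, which is why the interesting argument is about where the peak sits, not about the arithmetic.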
More importantly, even if there were a statistical tie, it would be settled by recounts, fraud, and the appeals process. See, e.g., 2000, 1960, and 1876. Elections that are within a few thousand votes come down to a contest of political will, sly manipulation, and spin waged among professional political operatives and election monitors/officials. If you really want to invest a few hours of your time to swing an election, you shouldn’t bother voting: you should volunteer to monitor a polling station.
I have no idea how to even begin analyzing standard vs. Hofstadterian decision theory, but it doesn’t matter for the example of national elections. A truly rational electoral activist would not vote, find a way to convincingly lie and claim that she voted, and use the extra 2 hours to make more phone calls urging others to vote. Possibly in some bizarre cyberpunk future that we may all yet live to see, Omega-like beings will create fanciful situations in which voting causes others to vote, but in the meantime mucking about with timeless decision theory to explain voting behavior is like applying relativistic mechanics to a high school track meet.
A truly rational electoral activist would not vote, find a way to convincingly lie and claim that she voted, and use the extra 2 hours to make more phone calls urging others to vote.
I think this is close to the mark, but not exactly correct: a truly rational electoral activist would not vote, find a way to convincingly lie and claim that she voted, use the extra 2 hours* however she likes, and whenever the subject of voting came up in regular everyday conversation she’d urge others to get informed and vote (or just refrain from discouraging them, if she’s uncomfortable/unskilled with hypocrisy).
* Two hours for voting? Whoa. Do you live in a crowded city, or very far away from the nearest polling station?

[grin] I live(d) in Broward County, Florida. You may have heard of its stellar reputation for effective polling.
I agree that there is a conflict between individual rationality and group rationality, but what is the word “just” doing in there? Individuals belong to groups and the group’s losses are shared out among the group’s members. This imposes a consistency constraint on the relationship between individual rationality and group rationality.
I wonder if there is a connection to the cost allocation problem in management accounting. In the electoral case, if faction A wins by a good margin and each member is $1000 better off because of the policies their man enacts, should they allocate $500 of profit to each of the two hours they spent voting and think themselves handsomely rewarded for their efforts, or should they look at the good margin and say “Two hours wasted, I could have stayed home and we would still have won”?

In the business case there is a fixed cost A for a machine and a marginal cost B per unit of production, so the cost model is A + BV. Obviously the business wants the sales force to go out there and sell at price P and volume V so that A + BV < PV. One ends up with a conflict between individual sales, for which any price above B is better than no sale, and overall sales, which need a margin to cover the fixed costs, the margin being uncertain as the total sales are uncertain.
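To make the sales version concrete, here is a toy sketch with invented numbers (the fixed cost, unit cost, prices, and volumes are all made up): every individual sale clears the marginal cost B, yet the total still fails to cover the fixed cost A.

```python
# Toy illustration of the fixed-cost conflict described above.
# All numbers are invented for the example.
A = 100_000        # fixed cost of the machine
B = 40.0           # marginal cost per unit

def marginal_profit(price):
    """What an individual salesperson sees: any price above B looks like a win."""
    return price - B

def total_profit(sales):
    """What the business sees: revenue minus marginal costs minus the fixed cost."""
    revenue = sum(price * volume for price, volume in sales)
    units = sum(volume for _, volume in sales)
    return revenue - B * units - A

# Every one of these sales clears the marginal cost B...
sales = [(45.0, 500), (50.0, 800), (42.0, 1200)]
print([marginal_profit(p) > 0 for p, _ in sales])   # [True, True, True]

# ...but together they do not leave enough margin to cover the fixed cost A.
print(total_profit(sales))                          # -87100.0
```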
I have just ordered Relevance Lost because I suspect there is a lot of history here, with people going round in circles trying to solve this problem. (Since I’ve ordered a second-hand copy for £1 + £2.75 postage I’m not risking much money on this suspicion :-)
One ends up with a conflict between individual sales, for which any price above B is better than no sale, and overall sales, which need a margin to cover the fixed costs, the margin being uncertain as the total sales are uncertain.
Right, but the conflict is for the manager alone to solve; the manager’s challenge is to create incentives that will encourage salespeople to further the company’s goals. The salespeople face no such challenge; their goal is (or should be) to do their job well with a minimum of time and effort.
Individuals belong to groups and the group’s losses are shared out among the group’s members. This imposes a consistency constraint on the relationship between individual rationality and group rationality.
With respect: no, it doesn’t. Everyone might wish that individual and group rationality would dovetail, but wishing doesn’t make it so. The whole of political economics (the study of governments, cartels, unions, mafias, and interest groups) is an attempt to cope with the lack of consistency constraints. Governments exist not just because people are irrational, but also because rational people often choose to behave in their narrow self-interest.
It is an interesting question whether defecting on the Prisoner’s Dilemma is truly rational when one is writing code for an AI. It is not an interesting question when dealing with flesh-and-blood humans: in a true Prisoner’s Dilemma, you defect, period. Thus, it should be our goal as designers of human social institutions to minimize and contain true Prisoner’s Dilemmas.
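For anyone who wants the “you defect, period” step spelled out, here is a minimal one-shot sketch with an invented payoff table: whatever the other player does, defecting earns strictly more, even though mutual defection leaves both players worse off than mutual cooperation.

```python
# One-shot Prisoner's Dilemma with a standard (invented) payoff table.
# Entries are (my_payoff, their_payoff); higher is better.
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_reply(their_move):
    """My payoff-maximizing move, holding the other player's move fixed."""
    return max(["cooperate", "defect"], key=lambda mine: PAYOFF[(mine, their_move)][0])

# Defecting is the better reply whichever move the other player makes.
print(best_reply("cooperate"), best_reply("defect"))   # defect defect
```

The iterated and detection-prone cases discussed below are exactly the ones where this table stops being the whole story.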
It is an interesting question whether defecting on the Prisoner’s Dilemma is truly rational when one is writing code for an AI. It is not an interesting question when dealing with flesh-and-blood humans: in a true Prisoner’s Dilemma, you defect, period.
But real flesh-and-blood humans are never in a true PD situation. They are in something more like an iterated PD—it is never a one-shot. If I choose not to vote, my neighbor knows—she works as a clerk at the polling place. If I belong to a union, my shop steward will know whether I have voted, because my union has poll-watchers.
Of course; that’s right. Sometimes the fear of detection or the hope of establishing long-term cooperation will get you out of what otherwise appears to be a PD. Other times, it won’t—if you see an abandoned laptop at a scenic pull-off on a recreational road trip, you’re pretty much dealing with a one-shot PD. If you return the laptop, it’s because you empathize with the owner or believe in karma, and not because you’re afraid that the laptop owner won’t return your laptop the next time around.
Still, it’s important not to believe that individual and collective rationality magically match up—that belief can lead to all kinds of honest but horribly tragic mistakes, like thinking that peasants will exert significant effort at farming when placed in a Trotsky-style commune.
I would upvote you thrice if I could. An overwhelming number of time-tested social dynamics, to say nothing of deliberately designed laws, can be seen to have arisen as anti-PD measures.
It is not an interesting question when dealing with flesh-and-blood humans: in a true Prisoner’s Dilemma, you defect, period.
No. At least, not with the period. It depends on who you are in prison with and on your respective abilities for prediction. (A true PD does not imply that the participants are human.)
Thus, it should be our goal as designers of human social institutions to minimize and contain true Prisoner’s Dilemmas.

I agree on this as well as your general point.