I’m astonished that this comment has been voted down. The comment speaks bluntly, saying that standard classical causal decision theory is “wrong”. Is the problem with standard classical causal decision theory really that bad? Yes!
Consider an election in which the members of faction A have accepted causal decision theory and each individual in A takes a cold-blooded decision that it is not worth his while to vote. Meanwhile faction B is made up of ordinary dumb schmucks. They believe that they have a moral duty to vote, except that half of them conveniently forget. Of those that remember, half get distracted by something good on TV. That still leaves a hard core, only half of whom are put off by the rain.
The outcome of the election is that faction B wins with a turnout of one eighth of its members, versus faction A, which failed to get the vote out at all.
Now comes the “tricky” bit: sticking labels on the factions. Which do we label “rational” and which “irrational”? If rationalists are the ones who win, we had better stick the label “rational” on B and “irrational” on A. “There is a lot of rocking and rolling up here, this is not a telemetry problem.”
So far I have just stated the paradox of voting, but there is a problem here for Less Wrong, not just for voting. Ordinary people know fine well that you have to get off your arse and go and vote. If we on Less Wrong will not admit that there is a problem and simply repeat “voting is irrational”, ordinary people will correctly conclude that we have disappeared up into our ivory tower, where we can believe in stupid theories that don’t work in the real world.
There’s no need to invoke any kind of fancy “superrationality.” There’s just a conflict between individual rationality and group rationality.
As a leader or activist, it’s in my interest to believe and say things like “Yay voting!” because that helps me lead mobs of people and achieve the election results I prefer.
As an individual private citizen, it’s in my interest to stay home and donate to charity, because my vote has much less than a 1⁄100,000,000 chance of swinging an election: history shows that national voting preferences are drawn from a curve that is much more like a normal distribution than a uniform distribution, and the peak of that curve in any given election sits 1-5% of registered voters away from an even split, predictably so, based on economic data, approval ratings, etc. In other words, a standard analytical rationalist should be able to predict that less than 1⁄100,000,000th of the probability mass of possible election outcomes falls on the exact 50-50 tie that you would need for your vote to matter in any straightforward instrumental sense. If you would take 5:4 odds in favor of either candidate, you shouldn’t be betting that the race will end in a tie even at odds of 100,000,000 to 1, and if you can’t figure out who to take 5:4 odds for by Election Day, then you haven’t read Green & Gerber’s paper.
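A back-of-envelope sketch of that probability claim, as a minimal Python calculation. The electorate size, forecast mean, and forecast spread below are all invented for illustration; the only piece taken as given is the standard large-electorate approximation P(decisive) ≈ forecast density at 50% divided by the number of voters, and the result is very sensitive to how far the forecast sits from an even split.

```python
import math

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    """Density of a normal distribution at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

n_voters = 100_000_000   # rough size of a US presidential electorate (assumed)
forecast_mean = 0.54     # assumed forecast two-party share: 4 points off a tie
forecast_sd = 0.015      # assumed forecast uncertainty

# Standard large-electorate approximation: P(decisive) ~ density at 0.5 / N.
p_decisive = normal_pdf(0.5, forecast_mean, forecast_sd) / n_voters
print(f"P(your vote swings it) ~ {p_decisive:.1e}")  # ~8e-9 under these assumptions
```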
More importantly, even if there were a statistical tie, it would be settled by recounts, fraud, and the appeals process. See, e.g., 2000, 1960, and 1876. Elections that are within a few thousand votes come down to a contest of political will, sly manipulation, and spin waged among professional political operatives and election monitors/officials. If you really want to invest a few hours of your time to swing an election, you shouldn’t bother voting: you should volunteer to monitor a polling station.
I have no idea how to even begin analyzing standard vs. Hofstadterian decision theory, but it doesn’t matter for the example of national elections. A truly rational electoral activist would not vote, would find a way to convincingly lie and claim that she voted, and would use the extra 2 hours to make more phone calls urging others to vote. Possibly in some bizarre cyberpunk future that we may all yet live to see, Omega-like beings will create fanciful situations in which voting causes others to vote, but in the meantime mucking about with timeless decision theory to explain voting behavior is like applying relativistic mechanics to a high school track meet.
A truly rational electoral activist would not vote, would find a way to convincingly lie and claim that she voted, and would use the extra 2 hours to make more phone calls urging others to vote.
I think this is close to the mark, but not exactly correct: a truly rational electoral activist would not vote, would find a way to convincingly lie and claim that she voted, would use the extra 2 hours* however she likes, and whenever the subject of voting came up in regular everyday conversation she’d urge others to get informed and vote (or just refrain from discouraging them, if she’s uncomfortable/unskilled with hypocrisy).
* Two hours for voting? Whoa. Do you live in a crowded city, or very far away from the nearest station?
I agree that there is a conflict between individual rationality and group rationality, but what is the word “just” doing in there? Individuals belong to groups, and the group’s losses are shared out among the group’s members. This imposes a consistency constraint on the relationship between individual rationality and group rationality.
I wonder if there is a connection to the cost allocation problem in management accounting. In the electoral case, if faction A wins by a good margin and each member is $1000 better off because of the policies their man enacts, should they allocate $500 of profit to each of the two hours they spent voting and think themselves handsomely rewarded for their efforts, or should they look at the good margin and say “Two hours wasted, I could have stayed home and we would still have won.”

In the business case there is a fixed cost A for a machine and a marginal cost B per unit of production, so the cost of producing volume V is A + BV. Obviously the business wants the sales force to go out there and sell at price P and volume V such that A + BV < PV. One ends up with a conflict between individual sales, for which any price above B is better than no sale, and overall sales, which need a margin to cover the fixed cost, that margin being uncertain because total sales are uncertain.
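A quick numeric sketch of that conflict, with all figures invented: each sale clears the marginal cost and so looks individually worthwhile, yet the total still fails to cover the fixed cost.

```python
# Invented figures illustrating the conflict above: every sale clears the
# marginal cost B and so looks individually worthwhile, yet total revenue
# still fails to cover the fixed cost A.
A = 10_000.0   # fixed cost of the machine
B = 5.0        # marginal cost per unit
P = 6.0        # sale price, above B, so each sale "earns" P - B = 1.0
V = 2_000      # units actually sold

per_sale_margin = P - B              # 1.0 > 0: each sale beats no sale
total_profit = P * V - (A + B * V)   # 12000 - 20000 = -8000: a loss overall
print(per_sale_margin, total_profit)
```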
I have just ordered Relevance Lost because I suspect there is a lot of history here, with people going round in circles trying to solve this problem. (Since I’ve ordered a second-hand copy for £1 + £2.75 postage, I’m not risking much money on this suspicion :-)
One ends up with a conflict between individual sales, for which any price above B is better than no sale, and overall sales, which need a margin to cover the fixed cost, that margin being uncertain because total sales are uncertain.
Right, but the conflict is for the manager alone to solve; the manager’s challenge is to create incentives that will encourage salespeople to further the company’s goals. The salespeople face no such challenge; their goal is (or should be) to do their job well with a minimum of time and effort.
Individuals belong to groups, and the group’s losses are shared out among the group’s members. This imposes a consistency constraint on the relationship between individual rationality and group rationality.
With respect: no, it doesn’t. Everyone might wish that individual and group rationality would dovetail, but wishing doesn’t make it so. The whole of political economics (the study of governments, cartels, unions, mafias, and interest groups) is an attempt to cope with the lack of such consistency constraints. Governments exist not just because people are irrational, but also because rational people often choose to behave in their narrow self-interest.
It is an interesting question whether defecting on the Prisoner’s Dilemma is truly rational when one is writing code for an AI. It is not an interesting question when dealing with flesh-and-blood humans: in a true Prisoner’s Dilemma, you defect, period. Thus, it should be our goal as designers of human social institutions to minimize and contain true Prisoner’s Dilemmas.
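For concreteness, a minimal sketch of the dominance argument in the one-shot game, with an invented payoff table obeying the standard T > R > P > S ordering:

```python
# One-shot Prisoner's Dilemma with an invented payoff table obeying the
# standard ordering T > R > P > S. Whatever the other player does,
# defecting pays strictly more; that is the dominance argument above.
payoff = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,   # R = 3 (mutual cooperation), S = 0 (sucker)
    ("D", "C"): 5, ("D", "D"): 1,   # T = 5 (temptation), P = 1 (mutual defection)
}
for their_move in ("C", "D"):
    assert payoff[("D", their_move)] > payoff[("C", their_move)]
print("Defect strictly dominates Cooperate in the one-shot game.")
```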
It is an interesting question whether defecting on the Prisoner’s Dilemma is truly rational when one is writing code for an AI. It is not an interesting question when dealing with flesh-and-blood humans: in a true Prisoner’s Dilemma, you defect, period.
But real flesh-and-blood humans are never in a true PD situation. They are in something more like an iterated PD—it is never a one-shot. If I choose not to vote, my neighbor knows—she works as a clerk at the polling place. If I belong to a union, my shop steward will know whether I have voted, because my union has poll-watchers.
Of course; that’s right. Sometimes the fear of detection or the hope of establishing long-term cooperation will get you out of what otherwise appears to be a PD. Other times, it won’t—if you see an abandoned laptop at a scenic view pull-over on a recreational road trip, you’re pretty much dealing with a one-shot PD. If you return the laptop, it’s because you empathize with the owner or believe in karma, and not because you’re afraid that the laptop owner won’t return your laptop the next time around.
Still, it’s important not to believe that individual and collective rationality magically match up—that belief can lead to all kinds of honest but horribly tragic mistakes, like thinking that peasants will exert significant effort at farming when placed in a Trotsky-style commune.
I would upvote you thrice if I could. An overwhelming number of time-tested social dynamics, to say nothing of deliberately designed laws, can be seen to have arisen as anti-PD measures.
It is not an interesting question when dealing with flesh-and-blood humans: in a true Prisoner’s Dilemma, you defect, period.
No. At least, not with the period. It depends who you are in the prison with and your respective abilities for prediction. (True PD does not imply participants are human.)
Thus, it should be our goal as designers of human social institutions to minimize and contain true Prisoner’s Dilemmas.
And this argument has what to do with my personal decision to vote?
My choice does not determine the choices of others who believe like me, unless I’m a lot more popular than I think I am.
After saying voting is irrational, the next step for someone who truly cares about political change is to go figure out what the maximal political change they can get for their limited resources is—what’s the most efficient way to translate time or dollars into change. I believe that various strategies have different returns that vary by many orders of magnitude.
So ordinary people doing the stupid obvious thing (voting, collecting signatures, etc.) might easily have 1/1000th the impact per unit time of someone who just works an extra 5 hours a week and donates the money to a carefully chosen advocacy organization. If these rationals are > 0.1% of the population, they have the greater aggregate impact, as the sketch below spells out. And convincing someone to become one of these anti-voting rationals ups their personal impact by 1000 times as much as convincing someone to vote.
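Spelling out that arithmetic under the stated numbers (impact units arbitrary, population share invented to sit just over the 0.1% threshold):

```python
# If each "anti-voting rational" has 1000x the per-person impact of an
# ordinary activist, then a population share of just over 0.1% of them
# outweighs everyone else combined.
ordinary_impact = 1.0      # impact per ordinary activist (arbitrary units)
rational_impact = 1000.0   # claimed per-person impact of a "rational"
rational_share = 0.0011    # just over 0.1% of the population

total_rational = rational_share * rational_impact         # 1.1
total_ordinary = (1 - rational_share) * ordinary_impact   # ~0.999
print(total_rational > total_ordinary)  # True: the tiny minority dominates
```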
There’s an organization (sorry no cite, but maybe this story will dredge up the info from other people) which teaches people how to be politically effective.
There was a woman who wanted to get a local issue taken care of, but she couldn’t get any traction. She went to the organization, and they found a mayor(?) who was running uncontested, and told her how to run against him.
If she’d actually run, he’d have needed to do a lot more campaigning, even though he certainly would have won. So he went to her and said, “What do you want?”
Her issue was taken care of, and she has continued to be politically active.
It is precisely the notion that Nature does not care about our algorithm, which frees us up to pursue the winning Way—without attachment to any particular ritual of cognition, apart from our belief that it wins. Every rule is up for grabs, except the rule of winning. (Newcomb’s Problem and Regret of Rationality)
This topic seems to come up all the time on LW. Surely it’s clear that any heuristic is bound to certain circumstances? If I make up the rule that I will only give you the money you want if you are a devoted irrationalist, then being a devoted irrationalist is what wins in that situation. I don’t see any paradox here.
Two hours for voting? Whoa. Do you live in a crowded city, or very far away from the nearest station?
[grin] I live(d) in Broward County, Florida. You may have heard of its stellar reputation for effective polling.
It depends who you are in the prison with and your respective abilities for prediction.

I agree on this as well as your general point.
I’m astonished that this comment has been voted down.

I would guess that the downvotes were due to the ascription of “misguided humility”, not calling CDT wrong.