I take it that you’re using the standard wrong classical causal decision theory (in which no one is responsible for the election outcome unless one side wins by a single vote, in which case millions of voters are all solely responsible for the entire election outcome) either out of misguided humility about the probability of an SIAI-originating decision theory being correct, or because you’re planning to publish this paper elsewhere and don’t want to invoke Hofstadterian superrationality in place of the standard wrong decision theory?
I don’t think this should be downvoted (was at −2 when I wrote this). I generally downvote only those things that are either trolling or that will have the same effect because the discussion will spiral in horribly non-truthy directions. This comment seems to me like it is a legitimate question, with a legitimate answer, that may lead to productive plans for the future.
The question is legitimate because UDT/TDT/etc are concepts that are basically novel to LW but that we would hope could be developed to the point of practical utility if LW is going to shine in some way other than being a popularizing mechanism for academic research.
The answer follows from the same thinking that explains why the question is legitimate.
With classical decision theory you’re basically just trying to figure out the costs and benefits of multiple options with some uncertainty mixed in. Then you pick the one that adds up to “the best thing”. The formal math basically exhorts you to perform numerical, multi-pronged due diligence, which most people don’t do for most things but which “decision theory” can encourage them to do in a structured way. It gives them a research keyword for finding lots of advice about methodology, and it also lets them know that being meticulous and planful about important choices is honestly likely to lead to better outcomes in some circumstances (where the value of information can even be estimated, no less).
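(As a rough illustration of the kind of bookkeeping classical decision theory formalizes; the options, probabilities, and dollar values below are invented for the sketch, not taken from the thread:)

```python
# Minimal expected-value sketch: list options, attach (probability, value) pairs,
# and pick whichever option has the highest expected value. Numbers are made up.
options = {
    "donate to charity A": [(0.9, 400), (0.1, -50)],
    "donate to charity B": [(0.5, 900), (0.5, 0)],
    "do nothing":          [(1.0, 0)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

for name, outcomes in options.items():
    print(f"{name}: EV = {expected_value(outcomes):.1f}")

best = max(options, key=lambda name: expected_value(options[name]))
print("Pick:", best)
```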
Just getting that slight improvement in process might end up making a huge difference because the sanity waterline really is that low.
But at the present time, I don’t have any clue what practical steps someone would perform if they were going to implement either TDT or UDT for practical personal or philanthropic benefit. Look up dollar values? Try to find probabilistic estimates of impact? But that stuff is simply the traditional way to go about such things and doesn’t require one to invoke a new decision theory, and so the concept doesn’t pay rent in utility if deploying it just means “doing the same thing other theories with better known names already endorsed”.
So to add in the new aspects of our novel decision theory we’re supposed to… um… look up the source code of voters? And then somehow emulate it? If the state of the world is uncertain do we need to run different emulations and average them to factor out the uncertainty? And do we completely ignore the other data that it seemed like basic prudence to gather, or do we integrate it with a formula somehow? On a spreadsheet? Using some Python script we hack up? Are we going to have to hire a programmer with Monte Carlo simulation experience to figure this out? Do we need to rent some supercomputer time?
It’s just not clear what “mental algorithm” should be performed if we’re going to gainfully apply a novel decision theory of the sort we’ve been kicking around on LW, completely leaving aside having an algorithm we can actually perform on paper the way we can calculate stock valuations using simple accounting rules to find the starting point and playing with estimates of the growth of total cash versus total stock.
And since “perfect is the enemy of good enough”, “good enough is the enemy of at all”, and optimal philanthropy efforts don’t seem to have been paying attention to the arena of politics at all (probably due to mindkiller processes that people in this community can appreciate), Carl’s approach seems to me like the best way to get at low-hanging fruit.
In the meantime, I think it would be really cool to find practical applications of UDT/TDT to real world situations (like maybe answering the question “Does this dress make me look fat?” or figuring out when and how to fire someone you like but who is doing a terrible job) that can be worked out as “examples for the textbook”, showing how to take a given “word problem” through a data gathering stage and into an algorithmic “plug and chug” stage.
Until TDT/UDT has been spelled out in such brutal simplicity that I can read about it, practice until I acquire the theory-grounded ability to notice whether I’m doing it well or poorly, and then teach a patient 12-year-old with an IQ of 120 to implement it on paper for a test case, it seems unrealistic to expect Carl to apply the theory to philanthropic political donations.
That was a beautiful, useful, and tactful exposition of the basic reasons I refrained from reversing the downvotes. Besides not voting, I also didn’t say “it sounds a bit unfair and self-aggrandizing to expect Carl to use a decision theory you’ve invented but have not actually explicitly described”, because that would be minimally useful, except in contrast to your essay.
Ah, I think I see. You subscribe to an author-oriented theory of moderation with strong emphasis on moral reward/punishment… and you don’t have a lot of charity here...
I generally just try to promote interesting content. I downvote into negative territory only when I expect content to be a net negative in terms of epistemic hygiene, as when people go into conflict mode and stop sympathizing with their conversational partners. Trolling, spamming, and sometimes even just content “below our sanity waterline” deserve negative votes in my book. This seems like none of those things.
Honest mistakes growing out of inferential distance (i.e., assuming that because Eliezer understands his own theory and wrote about it in public, everyone can now deploy it), and that relate to deeply worthwhile content given the big picture (i.e., that TDT is an early draft of an optimization algorithm that might be able to recursively and safely self-improve around morally fraught issues in a self-correcting way), seem like the kind of thing that needs more attention rather than less. So I vote up to increase the attention it receives instead of down to express moral displeasure.
Every so often I see something that seems to have been downvoted where I can plausibly imagine that the content was interesting, but there seem to be inferential distance problems between the voters and the commenter and I see if I can bridge the distance. Seeing as the comment is down to −3 now, I seem to be failing in this case. But maybe this will bring it up? :-)
Seeing as the comment is down to −3 now, I seem to be failing in this case.
You might have been more successful if you had avoided telling people how they should vote, particularly with wording implying prohibition. Stating your own decision, reasoning, and preference without presumption generally makes me more inclined to accommodate.
I might have voted up the comment in question if Eliezer had included even a single-sentence description of what difference using TDT instead of Lame DT would have made to the question of ‘politics as charity’. That would have been a real contribution to the thread. As it is, it is more of a ‘complete the pattern, then rant’ response that does not indicate that Eliezer even considered what TDT would mean in the context.
(It would change the example in the introduction and change the framing in some of the remainder. Politics would still be charity, even though what that means is changed somewhat.)
I downvoted because in any piece of public writing “I am right” should never be offered as a postulate, and my opinion of those who do so is best left unsaid.
The big problem is that many ordinary people already outperform the standard wrong classical causal decision theory. They get the vote out, they get their man into power, they get the preferences, restraints, and rents they are seeking and they laugh all the way to the bank.
There is a saying that a paradox is to a logician as the smell of burning insulation is to the electrical engineer. Many paradoxes are not really that serious, but the voting paradox strikes me as analogous to seeing flames.
The big problem is that many ordinary people already outperform the standard wrong classical causal decision theory. They get the vote out, they get their man into power, they get the preferences, restraints, and rents they are seeking and they laugh all the way to the bank.
Elections have losers as well as winners. Do you think people who vote for losers have a different decision theory to the people who vote for winners?
I remember a scene from a novel by Gene Wolfe in which a bunch of tribesmen find themselves on a battlefield against a foe armed with energy weapons. The tribesmen all engage in superstitious ritual meant to provide personal protection. Some of them get blown to bits and some don’t. The ones who survive are going to end up thinking that their ritual works.
By focusing only on the victors in the ritual of democracy, when you judge the rationality of the slackers who don’t vote at all, you are creating a similar illusion. Supporters of the loser do everything that supporters of the winner do. They go house to house, they hold rallies, they donate money, they send letters to the editor. They make that big investment of time, hope, and energy, because they believe in democracy, and they still don’t get any of what they want.
Your rhetorical question contains the noun phrase “people who vote for losers”. This seems to refer to the faction that misses out on the spoils of the electoral system because too many of its members subscribe to the theory that voting doesn’t matter, resulting in the faction losing because it couldn’t get its vote out. So the words “people who vote” are being used to refer to people who don’t vote.
This reminds me that I ought to stop reading LessWrong and get back to work on OuterCircle.
I’m astonished that this comment has been voted down. The comment speaks bluntly, saying that standard classical causal decision theory is “wrong”. Is the problem with standard classical causal decision theory really that bad? Yes!
Consider an election in which the members of faction A have accepted the standard decision theory, and each individual in A takes a cold-blooded decision that it is not worth his while to vote. Meanwhile faction B is made up of ordinary dumb schmucks. They believe that they have a moral duty to vote, except that half of them conveniently forget. Of those that remember, half get distracted by something good on TV. That still leaves a hard core, only half of whom are put off by the rain.
The outcome of the election is that faction B wins with a turnout of one eighth of its members versus faction A which failed to get the vote out at all.
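(Spelling out the turnout arithmetic; the faction size below is an assumption for illustration, only the successive halvings come from the example above:)

```python
# Half of faction B forget, half of the rest are distracted by the TV,
# and half of the remaining hard core are put off by the rain.
members_b      = 100_000              # assumed size of faction B
remember       = members_b / 2        # half conveniently forget
not_distracted = remember / 2         # half of those are lost to the TV
turnout_b      = not_distracted / 2   # half of the hard core stay home in the rain

print(turnout_b, "voters =", turnout_b / members_b, "of faction B")   # 12500.0 voters = 0.125
# Faction A's turnout is zero, so B wins with one eighth of its members voting.
```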
Now comes the “tricky” bit: sticking labels on the factions. Which do we label “rational” and which “irrational”? If rationalists are supposed to win, we had better stick the label “rational” on B and “irrational” on A. “There is a lot of rocking and rolling up here, this is not a telemetry problem.”
So far I have just stated the paradox of voting, but there is a problem here for Less Wrong, not just for voting. Ordinary people know fine well that you have to get off your arse and go and vote. If we on Less Wrong will not admit that there is a problem and simply repeat “voting is irrational”, ordinary people will correctly conclude that we have disappeared up into our ivory tower, where we can believe in stupid theories that don’t work in the real world.
There’s no need to invoke any kind of fancy “superrationality.” There’s just a conflict between individual rationality and group rationality.
As a leader or activist, it’s in my interest to believe and say things like “Yay voting!” because that helps me lead mobs of people and achieve the election results I prefer.
As an individual private citizen, it’s in my interest to stay home and donate to charity, because my vote has much less than a 1⁄100,000,000 chance of swinging an election: history shows that national voting preferences are drawn from a curve that is much more like a normal distribution than a uniform distribution, and the peak of the normal curve in any given election is going to be off by 1-5% of registered voters based on economic data, approval ratings, etc. In other words, a standard analytical rationalist should be able to predict that less than 1⁄100,000,000th of the curve of possible election outcomes falls under the exact 50-50 tie that you would need for your vote to matter in any straightforward instrumental sense. If you would take 5:4 odds in favor of either candidate, you shouldn’t be betting that the race will end in a tie even at odds of 100,000,000 to 1, and if you can’t figure out who to take 5:4 odds for by Election Day, then you haven’t read Green & Gerber’s paper.
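(A back-of-the-envelope version of this argument; the electorate size, expected margin, and spread below are assumptions chosen for illustration, not figures from the comment:)

```python
import math

# Model the winning candidate's final vote share as roughly Normal(mean, sd).
# The chance that the other voters split exactly 50-50, so that one extra ballot
# decides the race, is approximately the density at 0.5 divided by the number of voters.
N    = 100_000_000   # assumed electorate size
mean = 0.53          # assumed expected vote share (polls/economy put it off 50-50)
sd   = 0.02          # assumed spread of plausible final outcomes

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

p_decisive = normal_pdf(0.5, mean, sd) / N
print(f"P(one vote decides the election) ~ {p_decisive:.1e}")
# A handful of parts in a hundred million with these numbers, and the figure falls
# off exponentially as the expected margin moves further from 50%.
```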
More importantly, even if there were a statistical tie, it would be settled by recounts, fraud, and the appeals process. See, e.g., 2000, 1960, and 1876. Elections that are within a few thousand votes come down to a contest of political will, sly manipulation, and spin waged among professional political operatives and election monitors/officials. If you really want to invest a few hours of your time to swing an election, you shouldn’t bother voting: you should volunteer to monitor a polling station.
I have no idea how to even begin analyzing standard vs. Hofstadterian decision theory, but it doesn’t matter for the example of national elections. A truly rational electoral activist would not vote, find a way to convincingly lie and claim that she voted, and use the extra 2 hours to make more phone calls urging others to vote. Possibly in some bizarre cyberpunk future that we may all yet live to see, Omega-like beings will create fanciful situations in which voting causes others to vote, but in the meantime mucking about with timeless decision theory to explain voting behavior is like applying relativistic mechanics to a high school track meet.
A truly rational electoral activist would not vote, find a way to convincingly lie and claim that she voted, and use the extra 2 hours to make more phone calls urging others to vote.
I think this is close to the mark, but not exactly correct: a truly rational electoral activist would not vote, find a way to convincingly lie and claim that she voted, use the extra 2 hours* however she likes, and whenever the subject of voting came up in regular everyday conversation she’d urge others to get informed and vote (or just refrain from discouraging them, if she’s uncomfortable/unskilled with hypocrisy).
* Two hours for voting? Whoa. Do you live in a crowded city, or very far away from the nearest station?
[grin] I live(d) in Broward County, Florida. You may have heard of its stellar reputation for effective polling.
I agree that there is a conflict between individual rationality and group rationality, but what is the word “just” doing in there? Individuals belong to groups, and the group’s losses are shared out among the group’s members. This imposes a consistency constraint on the relationship between individual rationality and group rationality.
I wonder if there is a connection to the cost allocation problem in management accounting. In the electoral case, if faction A wins by a good margin and each member is $1000 better off because of the policies their man enacts, should they allocate $500 of profit to each of the two hours they spent voting and think themselves handsomely rewarded for their efforts, or should they look at the good margin and say “Two hours wasted, I could have stayed home and we would still have won”?

In the business case there is a fixed cost A for a machine and a marginal cost B per unit of production, so the total cost is A + B·V. Obviously the business wants the sales force to go out there and sell at price P and volume V so that A + B·V < P·V. One ends up with a conflict between individual sales, for which any price above B is better than no sale, and overall sales, which need a margin to cover the fixed costs, the margin being uncertain as the total sales are uncertain.
I have just ordered Relevance Lost because I suspect there is a lot of history here, with people going round in circles trying to solve this problem. (Since I’ve ordered a second copy for £1 + £2.75 postage I’m not risking much money on this suspicion :-)
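(A toy version of the fixed-cost conflict described above; the machine cost, unit cost, and price are invented figures:)

```python
A = 10_000   # fixed cost of the machine
B = 5.0      # marginal cost per unit
P = 7.0      # sale price per unit

def profit(volume):
    return P * volume - (A + B * volume)

# Any single sale at P > B looks worthwhile on the margin (it contributes P - B = 2
# toward the fixed cost), yet the firm only breaks even once volume exceeds
# A / (P - B) = 5000 units, and the eventual volume is uncertain.
for V in (1_000, 5_000, 10_000):
    print(f"volume {V:>6}: per-unit margin {P - B:.2f}, total profit {profit(V):>9.1f}")
```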
One ends up with a conflict between individual sales, for which any price above B is better than no sale, and overall sales, which need a margin to cover the fixed costs, the margin being uncertain as the total sales are uncertain.
Right, but the conflict is for the manager alone to solve; the manager’s challenge is to create incentives that will encourage salespeople to further the company’s goals. The salespeople face no such challenge; their goal is (or should be) to do their job well with a minimum of time and effort.
Individuals belong to groups, and the group’s losses are shared out among the group’s members. This imposes a consistency constraint on the relationship between individual rationality and group rationality.
With respect: no, it doesn’t. Everyone might wish that individual and group rationality would dovetail, but wishing doesn’t make it so. The whole of political economics—the study of governments, cartels, unions, mafias, and interest groups—is an attempt to cope with the lack of consistency constraints. Governments exist not just because people are irrational, but also because rational people often choose to behave in their narrow self-interest.
It is an interesting question whether defecting on the Prisoner’s Dilemma is truly rational when one is writing code for an AI. It is not an interesting question when dealing with flesh-and-blood humans: in a true Prisoner’s Dilemma, you defect, period. Thus, it should be our goal as designers of human social institutions to minimize and contain true Prisoner’s Dilemmas.
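(For concreteness, a one-shot Prisoner’s Dilemma with conventional textbook payoffs, not numbers from this thread, showing why defection dominates in a single play:)

```python
# (my move, their move) -> my payoff; C = cooperate, D = defect
payoff = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

for their_move in ("C", "D"):
    best_reply = max(("C", "D"), key=lambda my: payoff[(my, their_move)])
    print(f"If they play {their_move}, my best reply is {best_reply}")

# Prints D both times: defecting is better whatever the other player does,
# yet (D, D) leaves both players worse off than (C, C), which is why institutions
# try to minimize and contain true Prisoner's Dilemmas.
```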
It is an interesting question whether defecting on the Prisoner’s Dilemma is truly rational when one is writing code for an AI. It is not an interesting question when dealing with flesh-and-blood humans: in a true Prisoner’s Dilemma, you defect, period.
But real flesh-and-blood humans are never in a true PD situation. They are in something more like an iterated PD—it is never a one-shot. If I choose not to vote, my neighbor knows—she works as a clerk at the polling place. If I belong to a union, my shop steward will know whether I have voted, because my union has poll-watchers.
Of course; that’s right. Sometimes the fear of detection or the hope of establishing long-term cooperation will get you out of what otherwise appears to be a PD. Other times, it won’t—if you see an abandoned laptop at a scenic view pull-over on a recreational road trip, you’re pretty much dealing with a one-shot PD. If you return the laptop, it’s because you empathize with the owner or believe in karma, and not because you’re afraid that the laptop owner won’t return your laptop the next time around.
Still, it’s important not to believe that individual and collective rationality magically match up—that belief can lead to all kinds of honest but horribly tragic mistakes, like thinking that peasants will exert significant effort at farming when placed in a Trotsky-style commune.
I would upvote you thrice if I could. An overwhelming number of time-tested social dynamics, to say nothing of deliberately designed laws, can be seen to have arisen as anti-PD measures.
It is not an interesting question when dealing with flesh-and-blood humans: in a true Prisoner’s Dilemma, you defect, period.
No. At least, not with the period. It depends who you are in the prison with and your respective abilities for prediction. (True PD does not imply participants are human.)
I agree on this as well as your general point.
I would guess that the downvotes were due to the ascription of “misguided humility”, not calling CDT wrong.
Thus, it should be our goal as designers of human social institutions to minimize and contain true Prisoner’s Dilemmas.
And this argument has what to do with my personal decision to vote?
My choice does not determine the choices of others who believe like me, unless I’m a lot more popular than I think I am.
After saying voting is irrational, the next step for someone who truly cares about political change is to go figure out the maximal political change they can get for their limited resources—what’s the most efficient way to translate time or dollars into change. I believe that returns vary by many orders of magnitude across strategies.
So ordinary people doing the stupid obvious thing (voting, collecting signatures, etc.) might easily have 1/1000th the impact per unit time of someone who just works an extra 5 hours a week and donates the money to a carefully chosen advocacy organization. If these rationals are > 0.1% of the population, they have the greater impact. And convincing someone to become one of these anti-voting rationals ups their personal impact by 1000 times as much as convincing someone to vote.
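(Working through that arithmetic with assumed numbers; the population size and the per-person impact unit are invented, and the 1000x factor is the claim above:)

```python
population      = 1_000_000
voter_impact    = 1.0                    # impact per ordinary activist, arbitrary units
rational_impact = 1000 * voter_impact    # claimed ~1000x impact per "anti-voting rational"

for frac_rational in (0.0005, 0.001, 0.002):
    rationals = frac_rational * population
    others    = population - rationals
    print(f"rationals = {frac_rational:.2%}: their total impact "
          f"{rationals * rational_impact:,.0f} vs everyone else {others * voter_impact:,.0f}")

# The totals cross over at roughly 0.1% of the population, and converting one person
# to this strategy adds about 1000 times as much as persuading one extra voter.
```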
There’s an organization (sorry, no cite, but maybe this story will dredge up the info from other people) which teaches people how to be politically effective.
There was a woman who wanted to get a local issue taken care of, but she couldn’t get any traction. She went to the organization, and they found a mayor(?) who was running uncontested, and told her how to run against him.
If she’d actually run, he’d have needed to do a lot more campaigning, even though he certainly would have won. So he went to her and said, “What do you want?”
Her issue was taken care of, and she has continued to be politically active.
It is precisely the notion that Nature does not care about our algorithm, which frees us up to pursue the winning Way—without attachment to any particular ritual of cognition, apart from our belief that it wins. Every rule is up for grabs, except the rule of winning. (Newcomb’s Problem and Regret of Rationality)
This topic seems to come up all the time on LW. Surely it’s clear that any heuristic is bound to certain circumstances? If I make up the rule that I only give you the money you want if you are a devoted irrationalist, then being irrational is what wins in that circumstance. I don’t see any paradox here?