Significant shift in favor of voting in (presidential) elections being worthwhile.
Previously I figured that the chance of your vote mattering — in the consequentialist sense of actually leading to a different candidate being elected — is so incredibly small that voting isn’t actually worthwhile. With the US presidential election coming up, I decided to revisit that belief.
I googled and came across What Are the Chances Your Vote Matters? by Andrew Gelman. I didn’t read it too carefully, but I see that he estimates the chance of your vote mattering at somewhere between one in a million and one in a trillion. Those odds may seem low, but he also makes the following argument:
If your vote is decisive, it will make a difference for over 300 million people. If you think your preferred candidate could bring the equivalent of a $100 improvement in the quality of life to the average American—not an implausible number, given the size of the federal budget and the impact of decisions in foreign policy, health, the courts, and other areas—you’re now buying a $30 billion lottery ticket. With this payoff, a 1 in 10 million chance of being decisive isn’t bad odds.
$100/person seems incredibly low, but even at that estimate it’s enough for voting to have a pretty high expected value.
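To make the arithmetic explicit, here is a rough sketch of the expected-value calculation implied by Gelman's argument. The specific numbers (a 1 in 10 million chance of being decisive, $100 per person, roughly 300 million people) come from the quote above, not from my own estimates:

```python
# Rough expected-value sketch using the numbers from Gelman's quote above.
p_decisive = 1 / 10_000_000      # assumed chance your vote decides the election
benefit_per_person = 100         # assumed improvement per American, in dollars
population = 300_000_000         # approximate US population

total_benefit = benefit_per_person * population   # the "$30 billion lottery ticket"
expected_value = p_decisive * total_benefit

print(f"Total benefit if decisive: ${total_benefit:,.0f}")   # $30,000,000,000
print(f"Expected value of voting:  ${expected_value:,.0f}")  # $3,000
```

At the one in a million end of the estimated range the expected value rises to roughly $30,000; at one in a trillion it drops to a few cents.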
That’s assuming his estimates of the chance of your vote mattering are in the right ballpark, but I figure that they are. I recall seeing Gelman come up in the rationality community various times, including in the sidebar of Overcoming Bias. That’s enough evidence for me to find him highly trustworthy.
In retrospect, I feel silly for having previously thought that voting wasn’t worthwhile. How could I have overlooked the insanely large payoff part of the expected value calculation?
Moderate shift towards distrusting my own reasoning ability.
I feel like this is a pretty big thing for me to have overlooked, and my overlooking it points towards me generally being vulnerable to overlooking similarly important things in the future, in which case I can’t expect myself to reason super well about things.
Instead of thinking of it as a global variable, I tend to think of my reasoning ability as ‘good but narrow.’ Which is to say, it can arrive at good local results but is prone to missing better optima, even nearby ones, if they lie along dimensions that the problem domain doesn’t highlight. The higher the dimensionality of the problem, the more I view my results as highly provisional, and the more their quality and completeness are a function of time spent.
I like that way of thinking about it. The ability to notice those other dimensions seems like a hugely important skill though. It reminds me of this excerpt from HPMOR:
A Muggle security expert would have called it fence-post security, like building a fence-post over a hundred metres high in the middle of the desert. Only a very obliging attacker would try to climb the fence-post. Anyone sensible would just walk around the fence-post, and making the fence-post even higher wouldn’t stop that.
Once you forgot to be scared of how impossible the problem was supposed to be, it wasn’t even difficult...