The original poll (and these variants) mostly deal with a combination of issues related to coordination and altruism, but I think a variant that reframes things entirely in terms of a coordination and counterparty-modeling problem (and removes the death element) is also informative.
Suppose you’re playing a game with N other people and everyone has to choose a red or blue pill:
- If a majority [alt: a supermajority, say >90%, to make it harder] choose blue, everyone gets $100
- If everyone chooses red, everyone gets $100
- If a majority (but not all) choose red:
  - reds get $90
  - blues get $0
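The payoff rules above can be sketched as a small function (a minimal sketch: the function name and the simple-majority default are mine, and an exact red/blue tie, which the stated rules don't cover, is treated here like the red-majority case):

```python
def payoffs(n_blue, n_total, threshold=0.5):
    """Return (blue_payoff, red_payoff) for one round of the game.

    threshold is the fraction of players that must choose blue for the
    blue outcome: 0.5 for a simple majority, 0.9 for the harder
    supermajority variant mentioned above.
    """
    n_red = n_total - n_blue
    if n_blue > threshold * n_total:
        return 100, 100   # majority blue: everyone gets $100
    if n_red == n_total:
        return 0, 100     # everyone red: everyone gets $100 (no blues exist)
    return 0, 90          # majority-but-not-all red: reds $90, blues $0
```

For example, `payoffs(6, 10)` gives `(100, 100)`, while `payoffs(3, 10)` gives `(0, 90)`.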
Of course, this is a different game from the original poll, but it shares some of the same properties: red is the safe choice, in that if you choose it for yourself, you get most of the theoretical maximum payout, and you don’t have to worry or think about what anyone else might do.
OTOH, if you’re playing with a large enough group of N random earthlings, it is highly likely that someone is going to choose blue, so you won’t get the maximum possible payout by choosing red. If you’re in a setup where you’re confident that most people will choose red regardless of what you do though, choosing red is still the best you can do—getting the full $100 may be simply out of reach for certain parameters of this game.
OTOOH, if you’re playing with a bunch of friends or rationalists and / or you can discuss beforehand, you can all agree to choose blue, and likely there will be enough trust and sanity between all of you that everyone will get the $100, even if there are a few random troublemakers / trolls / risk-averse people who choose red.
For a given population, payout configuration, and majority threshold, under what circumstances should you choose red vs. blue? This is mainly a question of how well you can model the other players (and of your risk tolerance), including how well you can model them modelling you (and them modelling you modelling them, etc.), rather than a question about game theory or altruism. If you can discuss as a group beforehand, the modelling problem will generally become much easier, unless you’re in very unfavorable conditions (lots of trolls, a low-trust situation, etc.).
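One crude way to see where the red/blue line falls is to collapse the whole modelling problem into a single number: assume (as a deliberate simplification that ignores exactly the mutual modelling discussed above) that each other player independently chooses blue with probability p. A Monte Carlo sketch of that assumption, with function and parameter names of my own choosing:

```python
import random

def expected_payoffs(n_others, p_blue, threshold=0.5, trials=20000, seed=0):
    """Estimate your expected payoff for choosing blue vs. red, assuming
    each of n_others players independently chooses blue with probability
    p_blue. The independence assumption is a deliberate simplification:
    the point above is that real players model each other.
    """
    rng = random.Random(seed)
    n_total = n_others + 1
    ev = {"blue": 0.0, "red": 0.0}
    for _ in range(trials):
        others_blue = sum(rng.random() < p_blue for _ in range(n_others))
        for my_choice in ("blue", "red"):
            n_blue = others_blue + (my_choice == "blue")
            n_red = n_total - n_blue
            if n_blue > threshold * n_total:
                pay = 100   # blue majority: everyone wins
            elif n_red == n_total:
                pay = 100   # everyone red
            elif my_choice == "red":
                pay = 90    # red majority, but not all red
            else:
                pay = 0     # blue in a red-majority round
            ev[my_choice] += pay / trials
    return ev
```

With these payoff values, red’s $90 floor means blue only wins on expected value when a blue majority is nearly certain, which restates why the interesting part of the game is modelling and coordinating with the other players rather than solitary calculation.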
Separately, it would be nice to live in a world where, for most parameter settings of this game (value of N, population the players are drawn from, specific payoff values / configuration, threshold of blue-coordination required, level of prior communication allowed, etc.), most people will choose blue in most circumstances, with little or no prior coordination.
Which leads to the question of what general lessons and ideas about game theory, decision theory, and coordination we can teach people and spread widely, in order to enable blue majorities to form even under difficult circumstances. (I’m not sure exactly what these lessons would look like, but my best guess is that most of them are currently found mostly in relatively obscure web fiction.)
I think a lot of the controversy / confusion around the original Twitter poll arose because many people were getting these points mixed up, and not distinguishing how people would answer from how (they thought) people should answer, based on their own understanding of game theory or decision theory or altruism or whatever.