No matter what the game theory says, a non-zero number of people will choose blue and thus die under this equilibrium. This fact—that getting to >50% blue is the only way to save absolutely everyone—is enough for me to consider choosing blue and hope that others reason the same (which, in a self-fulfilling way, strengthens the case for choosing blue).
That would be questioning the assumption that your cost function as an altruist should be linear in the number of lives lost. I’m not sure why you would question this assumption, though; it seems rather unnatural to make this a concave function, which is what you would need for your logic to work.
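To illustrate the distinction (my own toy example, not anything from the original post): a linear cost makes every death equally bad, so only the expected number of deaths matters, while a concave cost puts extra weight on the first few deaths, which is what it would take for “save absolutely everyone” to carry special value.

$$C_{\text{linear}}(n) = c\,n \qquad\text{vs.}\qquad C_{\text{concave}}(n) = c\sqrt{n}$$

Under the concave version, one death already incurs a tenth of the harm of a hundred deaths, so the step from one death to zero is disproportionately valuable.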
I’m not quite sure what you mean by that.
Unless I expect the pool of responders to be 100% rational and thus all choose red, I should expect some to choose blue. Since I (and presumably other responders) do expect some to choose blue, that makes >50% blue the preferred outcome. Universal red is just not a realistic outcome.
Whether or not I choose blue then depends on factors like how much I value the lives of others compared to mine, the number of responders, and so on, as in the equations in your post.
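For what it’s worth, here is a rough sketch of the kind of calculation I have in mind. This is my own toy model rather than the equations from the post, and the probabilities, the 400 expected blue voters, and the altruism weight are all made-up numbers:

```python
def expected_costs(p_blue_majority_if_blue, p_blue_majority_if_red,
                   other_blue_voters, altruism_weight):
    """Toy expected-cost comparison for voting blue vs. red.

    Rules assumed: red voters always survive; blue voters survive only if
    blue gets a strict majority. Costs are expected deaths, with other
    people's deaths weighted by `altruism_weight` (1.0 = a stranger's
    life counts as much as mine).
    """
    # Voting blue: if blue misses the majority, I die (cost 1) and so do
    # the other blue voters.
    cost_blue = (1 - p_blue_majority_if_blue) * (1 + altruism_weight * other_blue_voters)
    # Voting red: I always survive, but the blue voters still die
    # whenever blue misses the majority.
    cost_red = (1 - p_blue_majority_if_red) * altruism_weight * other_blue_voters
    return cost_blue, cost_red

# Example: I expect ~400 other blue votes out of ~1000 responders, and my
# single vote barely moves the chance that blue reaches a majority.
blue, red = expected_costs(0.21, 0.20, other_blue_voters=400, altruism_weight=0.01)
print(f"expected cost if I vote blue: {blue:.2f}, if I vote red: {red:.2f}")
```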
Empirically, as GeneSmith points out, something is wrong with WalterL’s suggestion that red is the obvious choice no matter your reasoning. Applying his logic would push the Twitter poll away from the realistically ideal outcome of >50% blue and closer to the worst possible outcome (51% red).
A LOT depends on how you model the counterfactual of this poll being real and having consequences. I STRONGLY predict that 90+% of people who are given the poll, along with enough evidence that they believe the consequences are real, will pick red. Personal safety aligns with back-of-the-envelope calculations here—unless you can be pretty sure of meeting the blue threshold, you’re basically committing suicide by picking blue. And if it’s well over 50% blue without you, you may as well choose red then, too.
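Spelling out that back-of-the-envelope calculation (under the usual reading of the rules, where red voters always survive and blue voters survive only if blue clears the threshold):

$$P(\text{I survive}\mid\text{red}) = 1, \qquad P(\text{I survive}\mid\text{blue}) = P(\text{blue} > 50\%)$$

Unless you assign a high probability to blue clearing the threshold, voting blue is a large gamble with your own life, and a single vote almost never changes whether the threshold is cleared.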
There IS a superrationality argument for blue, in the case where you model that you’re sufficiently similar to 50%+ of people that you will naturally vote in a bloc due to shared priors and models. Then voting blue to save those dissimilar people who voted red may be justified. I don’t believe this holds for myself, or for any sizeable subset of humans.
I don’t share your intuition here. I think many people would see blue as the “band together” option and would have confidence that others will do the same. For the average responder, the question would reduce to “choose blue to signal trust in humanity, choose red to signal selfish cowardice”.
“Innate faith in human compassion, especially in a crisis” is the co-ordination mechanism, and I think there is pretty strong support for that notion if you look at how we respond to crises in real life and how we depict them in fiction. That may only be the narrative we tell ourselves, but for a co-ordination problem like this, the narrative is what matters.
I would be surprised if blue was less than 30%, and would predict around 60%.
Both all red and all blue are rational if I can expect everyone else to follow the same logic as me. Which one you prefer depends only on the amount of disagreement you expect and the value you place on other lives compared to your own. In any world that goes “I am perfectly rational, everyone else is too, and thus they will do the same as me”, it’s irrelevant what you pick.
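A minimal sketch of why both unanimous outcomes are stable, assuming the usual rules (red voters always survive, blue voters survive only if blue is a strict majority); the vote counts are arbitrary:

```python
def deaths(n_blue, n_red):
    """Deaths under the assumed rules: red voters always live,
    blue voters die unless blue is a strict majority."""
    total = n_blue + n_red
    blue_majority = n_blue > total / 2
    return 0 if blue_majority else n_blue

N = 1000

# Unanimous outcomes: nobody dies either way.
print(deaths(0, N))      # all red  -> 0 deaths
print(deaths(N, 0))      # all blue -> 0 deaths

# A lone deviation: from all-red it kills the deviator, from all-blue it
# changes nothing, so neither unanimous outcome gives anyone a selfish
# reason to switch.
print(deaths(1, N - 1))  # one blue among reds -> 1 death
print(deaths(N - 1, 1))  # one red among blues -> 0 deaths
```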