I’m not sure how I would work it out. The problem is that presumably you don’t value one group more because they chose blue per se (it’s because people who choose blue tend to be more altruistic in general) or because they chose red per se (it’s because they’re better at game theory or something). The choice is just an indicator of how much value you would put on them if you knew more about them. And since you already know a lot about the distribution of types of people in the world and how much you like them, the Bayesian update doesn’t really apply in the same way. It only applies to which pill they’ll take, because everyone is deciding with no knowledge of what the others will decide.
In the specific case where you don’t feel altruistic towards the blue choosers precisely because of a personal-responsibility argument (“that’s their own fault”), then trivially you should choose red. Otherwise I’m pretty confused about how to handle it. I think maybe only your level of altruism towards the blue choosers matters.
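To make that last point a bit more concrete, here’s a toy expected-utility sketch. It assumes the usual formulation of the dilemma (red choosers survive no matter what; blue choosers survive only if blue gets a majority), which I haven’t restated above, and every number in it (the win probabilities, the count of blue choosers, the altruism weight) is made up purely for illustration.

```python
# Toy sketch under assumed rules: red choosers always survive; blue choosers
# survive only if blue gets a majority. All numbers below are illustrative
# guesses, not anything implied by the problem itself.

# My credence that blue wins, conditional on my own choice. The tiny gap
# between the two reflects the (usually negligible) chance my vote is pivotal.
p_blue_wins_if_red = 0.40
p_blue_wins_if_blue = 0.41

n_blue_others = 1000       # hypothetical number of other people choosing blue
altruism_weight = 0.01     # how much I value one blue chooser's life vs my own


def expected_utility(my_choice: str) -> float:
    p_win = p_blue_wins_if_blue if my_choice == "blue" else p_blue_wins_if_red
    # My own survival: certain if I pick red, only when blue wins if I pick blue.
    my_survival = 1.0 if my_choice == "red" else p_win
    # Other blue choosers survive only when blue wins, whatever I pick.
    others = altruism_weight * n_blue_others * p_win
    return my_survival + others


for choice in ("red", "blue"):
    print(choice, expected_utility(choice))
```

Under these assumed rules the red choosers survive regardless of the outcome, so their welfare cancels out of the comparison entirely; the only terms left are your own survival, how pivotal you think your vote is, and how much you value the blue choosers. That’s roughly why it seems like only the altruism-towards-blue-choosers term can swing the answer.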