You might also assign different values to red-choosers and blue-choosers (one commenter I saw said they wouldn’t want to live in a world populated only by people who picked red), but I’m going to ignore that complication for now.
Roko has also mentioned that they think people who choose blue are bozos, and I think it’s fair to assume from their comments that they care less about bozos than about smart people.
I’m very interested in seeing the calculations where you assign different utilities to people depending on their choice (and possibly also on yours, e.g. if you only value people who choose the same way you do).
I’m not sure how I would work it out. The problem is that presumably you don’t value one group more because they chose blue per se (you value them because they’re more altruistic in general), or because they chose red (because they’re better at game theory or something like that). The choice is just an indicator of how much value you would place on them if you knew more about them. Since you already know a lot about the distribution of types of people in the world, and how much you like each type, the Bayesian update doesn’t really apply in the same way here. It only works on which pill they’ll take, because everyone is deciding with no knowledge of what the others will decide.
In the specific case where you don’t feel altruistic towards people who chose blue precisely because of a personal-responsibility argument (“that’s their own fault”), you should trivially choose red. Otherwise, I’m pretty confused about how to handle it. I think maybe only your level of altruism towards the blue-choosers matters.
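To make the “different utilities” idea a bit more concrete, here’s a minimal sketch of one way such a calculation could look. Everything specific in it is an assumption on my part, not from the thread: the poll rules (a strict majority of blue saves everyone, otherwise the blue-choosers die and the red-choosers live), the number of other participants, the binomial belief over how many of them pick blue, and the particular utility weights.

```python
import numpy as np
from scipy.stats import binom

# --- Assumptions (mine, not from the thread) ---------------------------------
N = 1000        # number of other participants
p_blue = 0.45   # my belief about the chance each other person picks blue
u_blue = 0.5    # utility I place on each surviving blue-chooser
u_red = 1.0     # utility I place on each surviving red-chooser
u_self = 1.0    # utility I place on my own survival

# Belief over how many of the N others pick blue (modelled as a Binomial).
k = np.arange(N + 1)
prob_k = binom.pmf(k, N, p_blue)

def expected_utility(my_choice: str) -> float:
    """Expected utility of picking `my_choice` ('red' or 'blue') under prob_k.

    Rules assumed: if a strict majority of all N+1 picks is blue, everyone
    survives; otherwise the blue-choosers die and the red-choosers live.
    """
    blue_total = k + (1 if my_choice == "blue" else 0)
    everyone_lives = blue_total > (N + 1) / 2

    # Red-choosers survive in *both* outcomes, so the (N - k) * u_red term is
    # identical for my two options and cancels out of the comparison.
    others_value = (N - k) * u_red + np.where(everyone_lives, k * u_blue, 0.0)

    i_survive = everyone_lives | (my_choice == "red")
    my_value = np.where(i_survive, u_self, 0.0)

    return float(np.sum(prob_k * (others_value + my_value)))

for choice in ("red", "blue"):
    print(f"E[utility | {choice}] = {expected_utility(choice):.3f}")
```

One thing the sketch does make visible: red-choosers survive in every outcome, so the value you place on them drops out when comparing the two options, which matches the intuition that only your altruism towards the blue-choosers matters. It also treats my belief about the others as independent of my own choice, so it ignores the correlated-decision angle above; it’s only meant to show where asymmetric utilities would enter.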