If each human makes predictions as if they are a randomly chosen human, then the predictions of humanity as a whole will be well-calibrated. That seems, to me, to be a property we shouldn’t ignore.
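This group-level calibration can be illustrated with a toy "incubator" experiment (my example, not from the original discussion): a fair coin creates one observer on heads and two on tails. An observer reasoning as if randomly sampled from all observers assigns P(heads) = 1/3, and across many runs the fraction of observers who actually live in heads-worlds converges to 1/3, so the pooled predictions are well-calibrated even though any single observer's situation remains contested.

```python
import random

def observer_calibration(trials=100_000, seed=0):
    """Simulate the incubator setup: heads -> 1 observer, tails -> 2 observers.

    Returns the fraction of all created observers who inhabit a heads-world,
    which is what a 'randomly chosen observer' credence in heads should match.
    """
    rng = random.Random(seed)
    heads_observers = 0
    total_observers = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        n = 1 if heads else 2  # number of observers created in this world
        total_observers += n
        if heads:
            heads_observers += n
    return heads_observers / total_observers

print(observer_calibration())  # close to 1/3
```

The per-observer credence of 1/3 here is the SIA-style answer; the point of the simulation is only that, whatever one thinks of it as an individual credence, it is the frequency that makes the group's predictions calibrated.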
Very true. I think one major problem with anthropic reasoning is that we tend to treat the combined or average outcome of a group (e.g. all humans) as the expected outcome for an individual. So when there is no strategy for drawing rational conclusions from the individual perspective, we automatically adopt the conclusions that, if followed by everyone in the group, would be most beneficial for the group as a whole.
Then what’s the appropriate group? Different choices of group lead to different conclusions. Thus the debates, most notably SSA vs SIA.