If each human makes predictions as if they are a randomly chosen human, then the predictions of humanity as a whole will be well-calibrated. That seems, to me, to be a property we shouldn’t ignore.
I think the Doomsday problem mostly fails because it’s screened off by other evidence. Yeah, we should update towards being around the middle of humanity by population, but we can still observe the world and make predictions about how long humanity is likely to last based on what we actually see.
To reuse the initial framing, think of what strategy would produce the best prediction results for humanity as a whole on the Doomsday problem. Taking into account lots of evidence beyond just “I am probably near the middle of humanity” will produce much better discrimination, even if not necessarily much better calibration.
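To make the calibration/discrimination distinction concrete, here is a minimal simulation sketch (my own toy construction; the 0.5 base rate and 0.8/0.2 signal strength are arbitrary choices, not anything from the discussion above). A predictor that always quotes the base rate is perfectly calibrated but has zero discrimination; a predictor that conditions on evidence stays calibrated while discriminating between cases, which shows up as a lower Brier score:

```python
import random

def brier(pairs):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

random.seed(0)
trials = []
for _ in range(100_000):
    outcome = random.random() < 0.5                         # event happens half the time
    evidence = random.random() < (0.8 if outcome else 0.2)  # noisy but informative signal
    trials.append((outcome, evidence))

# Ignores all evidence: always forecasts the base rate.
# Perfectly calibrated, zero discrimination.
base_rate_only = [(0.5, o) for o, _ in trials]

# Conditions on the evidence: by Bayes' rule, P(outcome | evidence) = 0.8 here.
# Still calibrated, but now distinguishes likely cases from unlikely ones.
uses_evidence = [((0.8 if e else 0.2), o) for o, e in trials]

print(brier(base_rate_only))  # 0.25
print(brier(uses_evidence))   # ~0.16, the improvement comes purely from discrimination
```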
(Of course, most people have historically been terrible at identifying when Doomsday will happen.)
If each human makes predictions as if they are a randomly chosen human, then the predictions of humanity as a whole will be well-calibrated. That seems, to me, to be a property we shouldn’t ignore.
Very true. I think one major problem with anthropic reasoning is that we tend to treat the combined or average outcome of a group (e.g. all humans) as the expected outcome for an individual. So when there is no strategy for drawing rational conclusions from the individual perspective, we automatically reach for the conclusions that, if adopted by everyone in the group, would be most beneficial for the group as a whole.
But then what is the appropriate group? Different choices of group lead to different conclusions. Hence the debates, most notably SSA vs. SIA.
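For what it’s worth, once a group is fixed, the group-level calibration property is easy to check numerically. Here is a minimal Monte Carlo sketch (my own construction, assuming “all humans ever born” as the group and an arbitrary toy prior over world sizes): a person at birth rank n who reasons as a randomly chosen human can assert, with confidence p, that the total number of humans N will satisfy N ≤ n/(1 − p), and across many simulated worlds that assertion comes out true for a fraction p of people, whatever the prior:

```python
import random

def doomsday_calibration(num_worlds=100_000, p=0.95):
    """Fraction of randomly chosen humans for whom the Doomsday-style
    prediction 'total population N <= my birth rank / (1 - p)' is true."""
    hits = 0
    for _ in range(num_worlds):
        # Toy prior over how many humans this world contains;
        # the result is insensitive to this choice.
        n_total = random.randint(1, 10**6)
        rank = random.randint(1, n_total)   # treat yourself as a random human
        hits += n_total <= rank / (1 - p)   # did the bound hold for this person?
    return hits / num_worlds

print(doomsday_calibration())  # ~0.95, matching the stated confidence
```

Swap in a different group and the set over which these predictions average changes, which is exactly where the SSA/SIA disagreement bites.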