This is my reaction too. This is a decision involving Omega in which the right thing to do is not update based on new information. In decisions not involving Omega, you do want to update. It doesn’t matter whether the new information is of an anthropic nature or not.
Yeah, I thought about it a bit more, and it still seems more akin to a “paradox of counterfactual mugging” than a “paradox of anthropic reasoning.”
To me, the confusing bits of anthropic reasoning would come into play more via stuff like “Aumann’s agreement theorem vs anthropic reasoning.”