I think I’ll have to sit and reread this a couple of times, but my INITIAL thought is “Isn’t the apparent inconsistency here qualitatively similar to the situation with a counterfactual mugging?”
This is my reaction too. This is a decision involving Omega in which the right thing to do is not update based on new information. In decisions not involving Omega, you do want to update. It doesn’t matter whether the new information is of an anthropic nature or not.
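To make that concrete, here is a minimal sketch of the counterfactual mugging payoff structure, assuming the standard $100 / $10,000 payoffs. It just shows that the ex-ante expected value favors the policy of paying, which is why the agent shouldn’t update its decision after seeing the coin:

```python
# Sketch of the counterfactual mugging, with assumed standard payoffs:
# on tails Omega asks you for $100; on heads Omega pays you $10,000
# iff it predicts you would have paid on tails.

P_HEADS = 0.5

def expected_value(pays_when_asked: bool) -> float:
    """Ex-ante expected value of committing to a policy before the coin flip."""
    heads_payoff = 10_000 if pays_when_asked else 0  # Omega rewards predicted payers
    tails_payoff = -100 if pays_when_asked else 0    # you hand over $100 on tails
    return P_HEADS * heads_payoff + (1 - P_HEADS) * tails_payoff

print("EV(pay):   ", expected_value(True))    # 4950.0
print("EV(refuse):", expected_value(False))   # 0.0

# After seeing tails, an agent that updates sees only the -$100, yet the
# policy chosen before updating is the one that does better on average.
```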
Yeah, I thought about it a bit more, and it still seems more akin to a “paradox of counterfactual mugging” than a “paradox of anthropic reasoning.”
To me, the confusing bits of anthropic reasoning would come into play more via stuff like “Aumann’s agreement theorem vs anthropic reasoning.”