It seems the earlier posts and your post have defined the anthropic shadow in subtly but importantly different ways. The earlier posts by Christopher and Jessica argued that AS is invalid: that there should be updates given that I survived. Your post argued that AS is valid: that there are games in which no information gained while playing can change your strategy (no useful updates). The former focuses on updates, the latter on strategy. These two positions are not mutually exclusive.
Personally, the concept of a “useful update” seems situational. For example, say someone has a prior that leads him to conclude the optimal strategy is not to play Chinese Roulette, but he is forced to play several rounds regardless of what he thinks. After surviving those rounds (say EEEEE), he might very well update his probability enough to change his strategy from no-play to play. That would be a useful update. And this forced-to-play kind of situation is quite relevant to existential risks, which anthropic discussions tend to focus on.
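To make the forced-to-play example concrete, here is a minimal sketch of the update. All the specifics (the flat grid prior over the per-round death probability, the decision threshold, the five survived rounds) are illustrative assumptions of mine, not part of the original comment:

```python
# Hypothetical sketch: a grid-prior Bayesian update on the per-round death
# probability p after being forced to survive n rounds. Numbers are
# illustrative assumptions only.
import numpy as np

p_grid = np.linspace(0.01, 0.99, 99)        # candidate per-round death probabilities
prior = np.ones_like(p_grid) / len(p_grid)  # flat prior over p

n_survived = 5                              # "EEEEE": five survived rounds
likelihood = (1 - p_grid) ** n_survived     # P(survive n rounds | p)
posterior = prior * likelihood
posterior /= posterior.sum()

# Expected death probability before and after the forced rounds:
p_prior = (p_grid * prior).sum()
p_post = (p_grid * posterior).sum()

# Toy decision rule: play iff expected death probability is below a threshold.
threshold = 0.35
print(f"prior E[p]={p_prior:.3f} -> play: {p_prior < threshold}")
print(f"posterior E[p]={p_post:.3f} -> play: {p_post < threshold}")
```

Under these (assumed) numbers the expected death probability drops from 0.5 to roughly 1/7 after EEEEE, which crosses the threshold and flips the decision from no-play to play: exactly the kind of strategy-changing, hence useful, update described above.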
True enough, I guess. I do wonder how to reconcile the two views, though, because the approach you describe, which allows you to update in the basic game, is actively worse in the second kind of game (the one with the blanks). There, the approach I suggested produces a peaked probability distribution on b that eventually converges to the correct value (on average, at least), whereas just looking at survival produces exactly the same monotonically decaying power law. If the latter is potentially useful information, I wonder how one might integrate the two.
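One way to see the contrast between the two posteriors is a quick simulation. The model here is my own stand-in, not the exact game from the thread: I assume b is the per-round chance the chamber is loaded, that in the blanks game a loaded chamber fires a harmless, countable blank, while in the basic game a survivor only ever observes "didn't die":

```python
# Hypothetical sketch (assumed model, not the thread's exact game):
# contrast the posterior on b from counting blanks with the posterior
# from survival alone.
import numpy as np

rng = np.random.default_rng(0)
b_true = 0.3                                # assumed true load probability
n_rounds = 200
b_grid = np.linspace(0.001, 0.999, 999)

# Blanks game: each round yields an observable Bernoulli(b) bang/no-bang,
# so the likelihood is binomial and the posterior peaks near b_true.
bangs = rng.binomial(1, b_true, size=n_rounds).sum()
log_post_blanks = bangs * np.log(b_grid) + (n_rounds - bangs) * np.log(1 - b_grid)
post_blanks = np.exp(log_post_blanks - log_post_blanks.max())
post_blanks /= post_blanks.sum()

# Basic game: a survivor's likelihood is just (1-b)^n, a monotonically
# decaying power law in b with no interior peak.
log_post_surv = n_rounds * np.log(1 - b_grid)
post_surv = np.exp(log_post_surv - log_post_surv.max())
post_surv /= post_surv.sum()

mode_blanks = b_grid[post_blanks.argmax()]  # peaks near b_true (on average)
mode_surv = b_grid[post_surv.argmax()]      # always at the smallest grid value
print(f"blanks-game posterior mode: {mode_blanks:.3f}")
print(f"survival-only posterior mode: {mode_surv:.3f}")
```

The peaked binomial posterior converges to b_true as rounds accumulate, while the survival-only posterior keeps its mode pinned at the low end of the grid no matter how much data arrives, which is the asymmetry between the two approaches described above.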