We can add anthropic flavor to this game:
1. Imagine this Chinese roulette, but you have forgotten N, the number of games you have already played. In that case, it is more reasonable to bet on a smaller number.
If we add the idea that firing the gun breaks the vase only with probability p, we arrive at the SSA counterargument to anthropic shadow recently suggested by Jessika. That is, most of the observations will happen before the risk event.
2. Imagine another variant. You know that the number of games played is N=6, and now you should guess n, the number of chambers typically loaded. In that case it is more reasonable to expect n=1 than n=5.
This is the SIA counterargument against anthropic shadow: if there are many worlds, you are more likely to find yourself in a world with the weakest anthropic shadow.
Note that both counterarguments do not kill anthropic shadow completely, but rather shift it to the mildest variant allowed. I explored this here.
And it all now looks like a variant of Sleeping Beauty, btw.
3. However, the most important application of anthropic shadow is the idea that we underestimate the fragility of our environment. Based on the previous Ns, one may conclude that the vase is unbreakable. This may cause global risks if some physical experiment is performed blindly in a fragile environment, e.g. geoengineering.
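The updates in points 1 and 2 can be made concrete with a small Bayesian sketch. The rules here are my assumption for illustration: 6 chambers, n of them loaded, one spin-and-fire per game, so the vase survives a game with probability (6−n)/6, and all priors are uniform.

```python
from fractions import Fraction

CHAMBERS = 6

def survive_one_game(n):
    # Vase survives one spin-and-fire with n of 6 chambers loaded.
    return Fraction(CHAMBERS - n, CHAMBERS)

# Point 2: N = 6 games played, vase intact; infer n (uniform prior on 1..5).
ns = range(1, 6)
prior_n = {n: Fraction(1, 5) for n in ns}
like_n = {n: survive_one_game(n) ** 6 for n in ns}
z = sum(prior_n[n] * like_n[n] for n in ns)
post_n = {n: prior_n[n] * like_n[n] / z for n in ns}
print(post_n[1] / post_n[5])  # 15625: surviving 6 games is 5**6 times likelier under n=1 than n=5

# Point 1: n = 1 known, N forgotten; infer N from "vase intact" (uniform prior on 0..19).
Ns = range(20)
weights = {N: survive_one_game(1) ** N for N in Ns}
z2 = sum(weights.values())
post_N = {N: weights[N] / z2 for N in Ns}
# The posterior decreases geometrically in N, so smaller N is the better bet.
print(float(post_N[0]), float(post_N[19]))
```

Both cases are the same survival-likelihood update; only which variable is unknown changes.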
Imagine this Chinese roulette, but you have forgotten N, the number of games you have already played. In that case, it is more reasonable to bet on a smaller number.
Not sure I follow—the idea is that the parameter b gets randomized at every game, so why would this change the optimal strategy? Each game is its own story.
If we add the idea that firing the gun breaks the vase only with probability p, we arrive at the SSA counterargument to anthropic shadow recently suggested by Jessika. That is, most of the observations will happen before the risk event.
I think this is roughly equivalent to what the blanks do? Now you just have a probability of b+(1−p)(1−b) of not breaking the vase in the case of a loaded chamber.
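That expression can be sanity-checked with a quick simulation. The numbers below are made up for illustration, and I'm assuming b is the chance of a blank (vase-sparing) chamber and p the chance that a live round actually breaks the vase:

```python
import random

random.seed(0)
b, p = 0.4, 0.7          # illustrative values: P(blank chamber), P(live round breaks vase)
trials = 200_000

survived = 0
for _ in range(trials):
    if random.random() < b:            # blank chamber: vase is safe
        survived += 1
    elif random.random() >= p:         # live round that fails to break the vase
        survived += 1

analytic = b + (1 - p) * (1 - b)       # the formula from the comment: 0.58 here
print(round(survived / trials, 3), analytic)
```

The Monte Carlo estimate and the closed-form value agree to within sampling noise.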
2. Imagine another variant. You know that the number of games played is N=6, and now you should guess n, the number of chambers typically loaded. In that case it is more reasonable to expect n=1 than n=5.
Only if you know that the games lasted more than one turn! Also, this only makes sense if you assume that the distribution from which n is picked before each game is not uniform.
But yeah, things can get plenty tricky if you assume some kind of non-uniform distribution between worlds, etc. But with so many possibilities, I'm not sure which one, specifically, would be interesting to explore in more depth.
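One concrete illustration of how the between-worlds distribution matters, assuming the thread's game rules (6 chambers, n loaded, per-game survival (6−n)/6, vase intact after 6 games): compare a uniform prior over n with one heavily skewed toward many loaded chambers. Both priors are invented for the example:

```python
from fractions import Fraction

def update_on_survival(prior, games=6):
    # Bayes update on "vase survived `games` one-shot games" with n of 6 chambers loaded.
    like = {n: Fraction(6 - n, 6) ** games for n in prior}
    z = sum(prior[n] * like[n] for n in prior)
    return {n: float(prior[n] * like[n] / z) for n in prior}

uniform = {n: Fraction(1, 5) for n in range(1, 6)}
skewed = {n: Fraction(1, 100) for n in range(1, 5)}
skewed[5] = Fraction(96, 100)          # most worlds run the deadliest game

print(update_on_survival(uniform))     # mass concentrates on n=1
print(update_on_survival(skewed))      # even a 96% prior on n=5 is mostly overturned
```

This is the SIA flavor again: the survival evidence is strong enough to swamp quite lopsided priors over worlds.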
My point was to add uncertainty about one’s location relative to the game situation—this is how it turns into more typical anthropic questions.