That anthropic reasoning doesn't work, or doesn't make sense, in many cases is closer to being a standard position on LW (for example). The standard trick for making anthropic problems less confusing is to pose them as decision problems instead of as problems about probabilities. This way, when there appears to be no natural way of assigning probabilities (to instances of an agent) that is useful for understanding the situation, we are not forced to endlessly debate which of the possible assignments is "the right one" anyway.
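For a concrete illustration (Sleeping Beauty is the usual example here; the specific bet below is just one way to set it up), recast the question as a choice of betting policy. Suppose that at each awakening Beauty may pay $x$ for a ticket worth $1$ if the fair coin landed tails (one awakening on heads, two on tails). If every awakening's bet is settled, always accepting has expected value $\tfrac12(-x) + \tfrac12(2 - 2x) = 1 - \tfrac32 x$, so she should accept exactly when $x < 2/3$ (the "thirder" price). If only one bet per experiment is settled, the expected value is $\tfrac12(-x) + \tfrac12(1 - x) = \tfrac12 - x$, so she should accept exactly when $x < 1/2$ (the "halfer" price). Either way the payoff structure pins down the optimal policy, and there is nothing left for the rival probability assignments to disagree about.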