Because of the context of the original idea (an anthropic question), I think the idea is that all ten of you are equivalent for decision-making purposes, and you can be confident that whatever you do is what all the others will do in the same situation.
Okay. If that is indeed the intention, then I declare this an anthropic problem, even if it describes itself as non-anthropic. It seems to me that anthropic reasoning was never fundamentally about fuzzy concepts like “updating on consciousness” or “updating on the fact that you exist” in the first place; indeed, I’ve always suspected that whatever it is that makes anthropic problems interesting and confusing has nothing to do with consciousness. Currently, I think that in essence it’s about a decision algorithm locating other decision algorithms correlated with it within the space of possibilities implied by its state of knowledge. In this problem, if we assume that all deciders are perfectly correlated, then (I predict) the solution won’t be any easier than just answering it for the case where all the deciders are copies of the same person.
I’ve always suspected that whatever it is that makes anthropic problems interesting and confusing has nothing to do with consciousness. Currently, I think that in essence it’s about a decision algorithm locating other decision algorithms correlated with it within the space of possibilities implied by its state of knowledge.
Sounds right, if you unpack “implied by its state of knowledge” so that it doesn’t mean “only consider possible worlds consistent with observations”. Basically, anthropic reasoning is about logical (even agent-provable) uncertainty, and for the same reason it is very sensitive to the problem statement and hard to get right, given that we have no theory anywhere near adequate for understanding decision-making under logical uncertainty.
(This is also a way of explaining away the whole anthropic reasoning question, by pointing out that nothing will be left to understand once you can make the logically correlated decisions correctly.)
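The “logically correlated decisions” point can be made concrete with a toy sketch. Everything specific below (the two possible worlds, the “yea”/“nay” actions, the payoff numbers) is a hypothetical stand-in supplied purely for illustration, not part of the original problem. The idea it shows: once all deciders run the same algorithm, choosing an action is really choosing a policy for every correlated instance at once, so expected value is computed over worlds at the policy level rather than updated separately by each instance.

```python
# Toy sketch of perfectly correlated deciders (illustrative only;
# the worlds, actions, and payoffs are hypothetical placeholders,
# not the payoffs of the problem under discussion).

# Hypothetical worlds: (probability, number of correlated deciders in it)
WORLDS = [(0.5, 1), (0.5, 9)]

def payoff(action, n_deciders):
    # Placeholder payoff function with made-up numbers.
    if action == "yea":
        return 1000 if n_deciders == 1 else 100
    return 700  # "nay" pays a flat amount in this toy setup

def expected_value(action):
    # Because the deciders are perfectly correlated, picking `action`
    # fixes what every instance does in every world, so the expected
    # value is just a sum over worlds -- no per-instance updating.
    return sum(p * payoff(action, n) for p, n in WORLDS)

best = max(["yea", "nay"], key=expected_value)
```

The point of the sketch is only structural: the decision variable ranges over policies, and each world’s payoff is a function of the one policy every correlated instance shares.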
(Though I’m still going to try to solve it.)