you should do a Bayesian update: the coin is 90% likely to have come up tails. So saying “yea” gives 0.9*1000 + 0.1*100 = 910 expected donation.
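A minimal sketch of the arithmetic being quoted, assuming the usual setup behind the 90% figure (ten participants, a fair coin, nine deciders if tails and one if heads; the $1000 and $100 payoffs are from the excerpt, and the calculation takes for granted that every decider says “yea”):

```python
# Sketch of the quoted update and expected value, under an assumed setup:
# 10 participants, fair coin, 9 deciders on tails, 1 decider on heads.
P_TAILS, P_HEADS = 0.5, 0.5
PARTICIPANTS = 10
DECIDERS = {"tails": 9, "heads": 1}

# P(I am a decider | outcome) for a randomly chosen participant.
p_dec_tails = DECIDERS["tails"] / PARTICIPANTS  # 0.9
p_dec_heads = DECIDERS["heads"] / PARTICIPANTS  # 0.1

# Bayesian update on the observation "I was chosen as a decider".
p_decider = P_TAILS * p_dec_tails + P_HEADS * p_dec_heads   # 0.5
p_tails_given_decider = P_TAILS * p_dec_tails / p_decider   # 0.9

# Expected donation from "yea", assuming every decider says "yea":
# $1000 on tails, $100 on heads (figures from the excerpt).
ev_yea = p_tails_given_decider * 1000 + (1 - p_tails_given_decider) * 100
print(ev_yea)  # 910.0
```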
I’m not sure if this is relevant to the overall nature of the problem, but in this instance the term 0.9*1000 is only correct if every other decider also says “yea”, and you don’t know that they will all reason the same way. If you decide on “yea” on that basis, and the coin came up tails, and even one of the other deciders says “nay”, then the donation is $0.
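To make the dependence explicit: under the ten-person setup assumed above there are eight other deciders on tails, and if each of them independently says “yea” with probability q, the tails term in the quoted calculation has to be multiplied by q^8, since a single “nay” zeroes the donation. A quick sketch (the eight-other-deciders figure is an assumption, not something stated in this thread):

```python
def ev_yea(q, p_tails=0.9, n_other_deciders=8):
    """Expected donation from saying "yea" when each other decider
    independently also says "yea" with probability q.

    On tails the $1000 requires unanimity (any "nay" gives $0); on heads
    you are the sole decider, so the $100 doesn't depend on anyone else.
    """
    return p_tails * (q ** n_other_deciders) * 1000 + (1 - p_tails) * 100

print(ev_yea(1.0))  # 910.0 -- the quoted figure, valid only if q = 1
print(ev_yea(0.9))  # ~397.4 -- already far below 910
print(ev_yea(0.5))  # ~13.5
```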
Is it possible to insert the assumption that the deciders will always reason identically (and, thus, that their decisions will be perfectly correlated) without essentially turning it back into an anthropic problem?
I’m not sure if this is relevant either, but I’m also not sure that such an assumption is needed. Note that failing to coordinate is the worst possible outcome—worse than successfully coordinating on any answer. Imagine that you inhabit case 2: you see a good argument for “yea”, but no equally good argument for “nay”, and there’s no possible benefit to saying “nay” unless everyone else sees something that you’re not seeing. Framed like this, choosing “yea” sounds reasonable, no?
I don’t see any particular way to coordinate on a “yea” answer. You have no ability to coordinate with the others while you’re answering questions, and “nay” appears to be the better bet before the problem starts.
It’s not uncommon to assume that everyone in a problem like this thinks in the same way you do, but I think making that assumption in this case would reduce it to an entirely different and less interesting problem—mainly because it renders the zero in the payoff matrix irrelevant if you choose a deterministic solution.
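For concreteness, here is what assuming a common deterministic strategy does, evaluated before the coin is flipped (which is also why “nay” looks like the better bet ex ante, as noted above). The all-“nay” payoff isn’t quoted anywhere in this thread; the $700 below is the figure from the standard statement of the problem and should be treated as an assumption. With a deterministic common answer the disagreement outcome never occurs, so the zero in the payoff matrix drops out:

```python
# Ex-ante expected donation when every decider deterministically gives
# the same answer, so the disagreement/$0 outcome can never happen.
# $1000 (tails, all "yea") and $100 (heads, "yea") are from the excerpt;
# the $700 for unanimous "nay" is assumed from the standard problem statement.
P_TAILS = 0.5

ev_all_yea = P_TAILS * 1000 + (1 - P_TAILS) * 100  # 550.0
ev_all_nay = 700  # the same whichever way the coin lands (assumed payoff)

print(ev_all_yea, ev_all_nay)  # 550.0 700 -> "nay" wins before the update
```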
Because of the context of the original idea (an anthropic question), I think the intent is that all ten of you are equivalent for decision-making purposes, and you can be confident that whatever you do is what all the others will do in the same situation.
Okay. If that is indeed the intention, then I declare this an anthropic problem, even if it describes itself as non-anthropic. It seems to me that anthropic reasoning was never fundamentally about fuzzy concepts like “updating on consciousness” or “updating on the fact that you exist” in the first place; indeed, I’ve always suspected that whatever it is that makes anthropic problems interesting and confusing has nothing to do with consciousness. Currently, I think that in essence it’s about a decision algorithm locating other decision algorithms correlated with it within the space of possibilities implied by its state of knowledge. In this problem, if we assume that all deciders are perfectly correlated, then (I predict) the solution won’t be any easier than just answering it for the case where all the deciders are copies of the same person.
(Though I’m still going to try to solve it.)
I’ve always suspected that whatever it is that makes anthropic problems interesting and confusing has nothing to do with consciousness. Currently, I think that in essence it’s about a decision algorithm locating other decision algorithms correlated with it within the space of possibilities implied by its state of knowledge.
Sounds right, if you unpack “implied by its state of knowledge” to not mean “only consider possible worlds consistent with observations”. Basically, anthropic reasoning is about logical (agent-provable, even) uncertainty, and for the same reason it is very sensitive to the problem statement and hard to get right, given that we have no theory anywhere near adequate for understanding decision-making under logical uncertainty.
(This is also a way of explaining away the whole anthropic reasoning question, by pointing out that nothing will be left to understand once you can make the logically correlated decisions correctly.)