If we’re assuming that all of the deciders are perfectly correlated, or (equivalently?) that for any good argument for whatever decision you end up making, all the other deciders will think of the same argument, then I’m just going to pretend we’re talking about copies of the same person. As I’ve argued, that seems to require the same kind of reasoning anyway, and it’s a little simpler to talk about than having to speak as though everyone is a different person who will reliably make the same decision.
Anyway:
Something is being double-counted here. Or there’s some kind of sleight-of-hand that vaguely reminds me of this problem, where it looks like something has been misplaced but you’re really just being misdirected by the phrasing. (Not that I’m accusing anyone of doing that intentionally in any version of this problem.) I can’t quite pin it down, but it seems like whatever would (under any circumstance) lead you to assign a .9 decision-theoretic weighting to the tails-world is already accounted for by the fact that there are 9 of you (i.e. 9 who’ve been told that they’re deciders) in that world. I’m not sure how to express that formally, but I think this is moving in the right direction.

Imagine a tree of agents produced by the coin flip: the heads branch contains one decider and nine non-deciders; the tails branch contains nine deciders and one non-decider. Each decider needs its own judgment of decision-theoretic weighting, but the right weighting depends on what kind of decision it is. If each one assigns .9 weight to the possibility that it is in the tails branch, that would be relevant if every agent’s decision were counted individually (say, if each one had to guess either heads or tails and would get $1 if correct; they’d do better guessing tails than flipping a coin to decide). But in this case the decision is collective and only counted once, so there’s no reason to count the multiple copies of you as relevant to the decision in the first place. It’s like running a (constant) program nine times in tails-world and doing something based on its output, while running it once in heads-world and doing something else based on its output: the structure of the problem doesn’t actually require the algorithm to know how many times it’s being executed. I think that’s the misdirection.
(Edit: Sorry, that was sort of a ramble/stream-of-consciousness. The part from “If each one assigns...” onward is the part I currently consider relevant and correct.)
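To make that individual-vs-collective distinction concrete, here is a minimal simulation sketch. The $1 side-bet is the illustrative example from the paragraph above; the rest (the trial count, the “always guess tails” strategy) is just an assumption for the sake of the example.

```python
import random

# Sketch: compare how often "guess tails" wins under two counting rules.
TRIALS = 100_000

per_agent_wins = 0     # individual rule: every decider's $1 bet is scored separately
per_agent_bets = 0
collective_wins = 0    # collective rule: the group's (identical) guess is scored once per world

for _ in range(TRIALS):
    coin = random.choice(["heads", "tails"])
    num_deciders = 9 if coin == "tails" else 1

    # Individual counting: each decider bets $1 on "tails".
    per_agent_bets += num_deciders
    if coin == "tails":
        per_agent_wins += num_deciders

    # Collective counting: one guess of "tails" per world, scored once.
    if coin == "tails":
        collective_wins += 1

print("win rate per individual decider:", per_agent_wins / per_agent_bets)  # ~0.9
print("win rate per world (collective):", collective_wins / TRIALS)         # ~0.5
```

The .9 only shows up when each decider’s bet is scored on its own; score the collective decision once per world and the relevant weight falls back to the coin’s .5.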
It looks like the double-count is that you treat yourself as an autonomous agent when you update on the evidence of being a decider, but as one agent of a perfectly coordinated movement when measuring the payoffs. The fact that dividing the payoffs in the 9-decider case by 9 gives the right answer points in this direction.
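As a hedged worked sketch of that divide-by-9 observation: the payoffs below (saying yes is worth $1000 if the coin was tails and $100 if it was heads, saying no is worth $700 either way) are my assumption, taken from the usual statement of the problem rather than from anything above.

```python
# Post-update credences of a decider, as in the discussion above.
P_TAILS, P_HEADS = 0.9, 0.1
# Assumed payoffs (not stated in this comment): yes -> $1000 on tails, $100 on heads; no -> $700.
YES_TAILS, YES_HEADS, NO = 1000, 100, 700
N_DECIDERS_TAILS = 9

# Updating as an autonomous agent AND claiming the whole collective payoff:
naive_yes = P_TAILS * YES_TAILS + P_HEADS * YES_HEADS   # 910.0
naive_no  = P_TAILS * NO + P_HEADS * NO                 # 700.0
print(naive_yes, naive_no)    # says "yes" -- the paradoxical answer

# Dividing the 9-decider (tails) payoffs by 9, one share per decider:
shared_yes = P_TAILS * YES_TAILS / N_DECIDERS_TAILS + P_HEADS * YES_HEADS   # 110.0
shared_no  = P_TAILS * NO / N_DECIDERS_TAILS + P_HEADS * NO                 # 140.0
print(shared_yes, shared_no)  # says "no", matching the ex-ante calculation (550 vs 700)
```

The division works because 0.9 / 9 = 0.1: splitting the tails payoff into nine shares exactly cancels the factor of nine that inflated the update, which is what you’d expect if the .9 was only ever tracking how many copies of you the tails branch contains.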