I think that argument is highly suspect, primarily because I see no reason why a notion of “responsibility” should have any bearing on your decision theory. Decision theory is about achieving your goals, not avoiding blame for failing.
However, even if we do include some notion of responsibility, I think your argument is still incorrect. Consider this version of the incubator Sleeping Beauty problem, where two coins are flipped:
HH ⇒ Sleeping Beauties created in Rooms 1, 2, and 3
HT ⇒ Sleeping Beauty created in Room 1
TH ⇒ Sleeping Beauty created in Room 2
TT ⇒ Sleeping Beauty created in Room 3
Moreover, in each room there is a sign. In Room 1 it is equally likely to say either “This is not Room 2” or “This is not Room 3”, and so on for each of the three rooms.
Now, each Sleeping Beauty is offered a choice between two coupons; each coupon gives the specified amount to their preferred charity (by assumption, utility is proportional to $ given to charity), but only if all of them choose the same coupon. The payoff looks like this:
A ⇒ $12 if HH, $0 otherwise.
B ⇒ $6 if HH, $2.40 otherwise.
I’m sure you see where this is going, but I’ll do the math anyway.
With SIA+divided responsibility, we have
p(HH) = p(not HH) = 1⁄2
The responsibility is divided among 3 people in HH-world, and among 1 person otherwise, therefore
EU(A) = (1/2)(1/3)$12 = $2.00
EU(B) = (1/2)(1/3)$6 + (1/2)$2.40 = $2.20
With SSA+total responsibility, we have
p(HH) = 1⁄3
p(not HH) = 2⁄3
EU(A) = (1/3)$12 = $4.00
EU(B) = (1/3)$6 + (2/3)$2.40 = $3.60
So SIA+divided responsibility suggests choosing B, but SSA+total responsibility suggests choosing A.
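For concreteness, here is a minimal Python sketch of the arithmetic above (not part of the original comment; it uses only the payoffs, probabilities, and responsibility divisors already stated):

```python
from fractions import Fraction as F

# Payoffs for the two coupons: A pays $12 only if HH; B pays $6 if HH and $2.40 otherwise.
PAYOFF = {"A": {"HH": F(12), "notHH": F(0)},
          "B": {"HH": F(6), "notHH": F(24, 10)}}

def expected_utility(p_hh, divisor_hh, divisor_not_hh):
    """EU of each coupon given P(HH) and the 'responsibility' divisor in each world."""
    p_not = 1 - p_hh
    return {coupon: float(p_hh * PAYOFF[coupon]["HH"] / divisor_hh
                          + p_not * PAYOFF[coupon]["notHH"] / divisor_not_hh)
            for coupon in PAYOFF}

# SIA + divided responsibility: P(HH) = 1/2, payoff split among 3 agents if HH, 1 otherwise.
print("SIA+divided:", expected_utility(F(1, 2), 3, 1))  # {'A': 2.0, 'B': 2.2} -> choose B
# SSA + total responsibility, with P(HH) = 1/3 as used above.
print("SSA+total:  ", expected_utility(F(1, 3), 1, 1))  # {'A': 4.0, 'B': 3.6} -> choose A
```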
The SSA probability of HH is 1⁄4, not 1⁄3.
Proof: before opening their eyes, the SSA agents divide probability as: 1⁄12 HH1 (HH and they are in room 1), 1⁄12 HH2, 1⁄12 HH3, 1⁄4 HT, 1⁄4 TH, 1⁄4 TT.
Upon seeing a sign saying “this is not room X”, they remove one possible agent from the HH world, and one possible world from the remaining three. So this gives odds of HH:¬HH of (1/12+1/12):(1/4+1/4) = 1/6:1/2, or 1:3, which is a probability of 1⁄4.
This means that SSA+total responsibility says EU(A) is $3.00 and EU(B) is $3.30: exactly the same ratio as the first setup, with B as the best choice.
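The same 1⁄4 figure falls out of a brute-force enumeration. Here is a minimal Python sketch of that check (mine, not part of the original comment), assuming the uniform sign randomisation described above:

```python
from fractions import Fraction as F

# Which rooms are occupied under each pair of coin flips (prior 1/4 each).
WORLDS = {"HH": [1, 2, 3], "HT": [1], "TH": [2], "TT": [3]}

target = 2      # condition on having seen the sign "This is not Room 2"
weight = {}     # unnormalised SSA weight of each world, given that observation

for world, rooms in WORLDS.items():
    w = F(0)
    for room in rooms:
        agent_weight = F(1, 4) / len(rooms)        # SSA: split the world's prior among its agents
        other_rooms = [r for r in (1, 2, 3) if r != room]
        if target in other_rooms:                  # the sign names one of the other two rooms,
            w += agent_weight * F(1, 2)            # each with probability 1/2
    weight[world] = w

total = sum(weight.values())
p_hh = weight["HH"] / total
print(p_hh)     # 1/4
# With total responsibility this gives EU(A) = $3.00 and EU(B) = $3.30.
print(float(p_hh * 12), float(p_hh * 6 + (1 - p_hh) * F(24, 10)))
```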
That’s not true. The SSA agents are only told about the conditions of the experiment after they’re created and have already opened their eyes.
Consequently, isn’t it equally valid for me to begin the SSA probability calculation with those two agents already excluded from my reference class?
Doesn’t this mean that SSA probabilities are not uniquely defined given the same information, because they depend upon the order in which that information is incorporated?
Yep. The old reference class problem. Which is why, back when I thought anthropic probabilities were meaningful, I was an SIAer.
But SIA also has some issues with order of information, though it’s connected with decisions (http://lesswrong.com/lw/4fl/dead_men_tell_tales_falling_out_of_love_with_sia/).
Anyway, if your reference class consists of people who have seen “this is not room X”, then “divided responsibility” is no longer 1⁄3, and you probably have to go full UDT.
Can you illustrate how the order of information matters there? As far as I can tell it doesn’t, and hence it’s just an issue with failing to consider counterfactual utility, which SIA ignores by default. It’s definitely a relevant criticism of using anthropic probabilities in your decisions, because failing to consider counterfactual utility results in dynamic inconsistency, but I don’t think it’s as strong as the associated criticism of SSA.
If divided responsibility is not 1⁄3, what do those words even mean? How can you claim that only two agents are responsible for the decision when it’s quite clear that the decision is a linked decision shared by three agents?
If you’re taking “divided responsibility” to mean “divide by the number of agents used as an input to the SIA-probability of the relevant world”, then your argument that SSA+total = SIA+divided boils down to this: “If, in making decisions, you (an SIA agent) arbitrarily choose to divide your utility for a world by the number of subjectively indistinguishable agents in that world in the given state of information, then you end up with the same decisions as an SSA agent!”
That argument is, of course, trivially true, because the number of agents you’re dividing by will be the ratio between the SIA odds and the SSA odds of that world. If you allow me to choose arbitrary constants to scale the utility of each possible world, then of course your decisions will not be fully specified by the probabilities, no matter what decision theory you happen to use. Besides, you haven’t even given me any reason why it makes any sense at all to measure my decisions in terms of “responsibility” rather than simply using my utility function in the first place.
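To spell that odds-ratio claim out with the numbers from this example (a quick check of my own, using SIA P(HH) = 1⁄2 and the sign-updated SSA P(HH) = 1⁄4 computed above):

```python
from fractions import Fraction as F

p_sia = F(1, 2)   # SIA probability of the HH world, as used above
p_ssa = F(1, 4)   # SSA probability of the HH world after seeing the sign, as computed above

sia_odds = p_sia / (1 - p_sia)   # 1
ssa_odds = p_ssa / (1 - p_ssa)   # 1/3
print(sia_odds / ssa_odds)       # 3, matching the divisor used in the SIA+divided calculation
```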
On the other hand, if you could justify why it makes sense to include a notion of “divided responsibility” in my decision theory, then that argument would tell me that SSA+total responsibility must be conceptually the wrong way to do things, because it uses total responsibility instead.
All in all, I do think anthropic probabilities are suspect for use in a decision theory, because:
They result in reflective inconsistency by failing to consider counterfactuals.
It doesn’t make sense to use them for decisions when the probabilities could depend upon the decisions (as in the Absent-Minded Driver).
That said, even if you can’t use those probabilities in your decision theory there is still a remaining question of “to what degree should I anticipate X, given my state of information”. I don’t think your argument on “divided responsibility” holds up, but even if it did the question on subjective anticipation remains unanswered.
Yes, that’s essentially it. However, the idea of divided responsibility has been proposed before (though not in those terms) - it’s not just a hack I made up. The basic idea is this: if ten people need to vote unanimously “yes” for a policy that benefits them all, do they each consider that their vote made the difference between the policy and no policy, or that it contributed a tenth of that difference? Divided responsibility actually makes more intuitive sense in many ways, because we could replace the unanimity requirement with “you cause 1⁄10 of the policy to happen” and it’s hard to see what the difference is (assuming that everyone votes identically).
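A toy sketch of that vote (my own, with made-up numbers: ten voters and a policy worth $100) shows why the two attributions agree here: when everyone votes identically, dividing by ten only rescales the expected utilities, so the recommended vote is unchanged; the divisor only starts to matter once worlds with different numbers of linked agents enter the picture, as in the room example above.

```python
POLICY_VALUE = 100.0   # made-up figure: what the policy is worth
N_VOTERS = 10          # unanimity required, and everyone votes identically

def eu(vote, share):
    """Expected utility of a vote, attributing `share` of the outcome to this voter."""
    outcome = POLICY_VALUE if vote == "yes" else 0.0
    return share * outcome

# "My vote made the whole difference" (total responsibility):
print(eu("yes", 1.0), eu("no", 1.0))                        # 100.0 0.0 -> vote yes
# "My vote contributed a tenth of the difference" (divided responsibility):
print(eu("yes", 1.0 / N_VOTERS), eu("no", 1.0 / N_VOTERS))  # 10.0 0.0 -> vote yes
```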
But all these approaches (SIA and SSA and whatever concept of responsibility) fall apart when you consider that UDT allows you to reason about agents that will make the same decision as you, even if they’re not subjectively indistinguishable from you. Anthropic probability can’t deal with these—worse, it can’t even consider counterfactual universes where “you” don’t exist, and doesn’t distinguish well between identical copies of you that have access to distinct, non-decision relevant information.
Ah, subjective anticipation… That’s an interesting question. I often wonder whether it’s meaningful. If we create 10 identical copies of me and expose 9 of them to one stimulus and 1 to another, what is my subjective anticipation of seeing one stimulus rather than the other? 10% is one obvious answer, but I might take a view of personal identity that fails to distinguish between identical copies of me, in which case 50% is correct. What if identical copies will be recombined later? Eliezer had a thought experiment where agents were two-dimensional and could get glued to or separated from each other, and wondered whether this made any difference. I do too. And I’m also very confused about quantum measure, for similar reasons.
OK, the “you cause 1⁄10 of the policy to happen” argument is intuitively reasonable, but under that kind of argument divided responsibility has nothing to do with how many agents are subjectively indistinguishable and instead has to do with the agents who actually participate in the linked decision.
On those grounds, “divided responsibility” would give the right answer in Psy-Kosh’s non-anthropic problem. However, this also means your argument that SIA+divided = SSA+total clearly fails, because of the example I just gave before, and because SSA+total gives the wrong answer in Psy-Kosh’s non-anthropic problem but SIA+divided does not.
As do I. But, as Manfred has said, I don’t think that being confused about it is sufficient reason to believe it’s meaningless.
The divergence between the reference class of identical people and the reference class of agents making the same decision is why I advocate for ADT (which is essentially UDT in an anthropic setting).