That’s not true. The SSA agents are only told about the conditions of the experiment after they’re created and have already opened their eyes.
Consequently, isn’t it equally valid for me to begin the SSA probability calculation with those two agents already excluded from my reference class?
Doesn’t this mean that SSA probabilities are not uniquely defined given the same information, because they depend upon the order in which that information is incorporated?
Doesn’t this mean that SSA probabilities are not uniquely defined given the same information, because they depend upon the order in which that information is incorporated?
Yep. The old reference class problem. Which is why, back when I thought anthropic probabilities were meaningful, I was an SIAer.
Anyway, if your reference class consists of people who have seen “this is not room X”, then “divided responsibility” is no longer 1⁄3, and you probably have to go full UDT.
But SIA also has some issues with order of information, though it’s connected with decisions (http://lesswrong.com/lw/4fl/dead_men_tell_tales_falling_out_of_love_with_sia/).
Can you illustrate how the order of information matters there? As far as I can tell it doesn’t, and hence it’s just an issue with failing to consider counterfactual utility, which SIA ignores by default. It’s definitely a relevant criticism of using anthropic probabilities in your decisions, because failing to consider counterfactual utility results in dynamic inconsistency, but I don’t think it’s as strong as the associated criticism of SSA.
Anyway, if your reference class consists of people who have seen “this is not room X”, then “divided responsibility” is no longer 1⁄3, and you probably have to go full UDT.
If divided responsibility is not 1⁄3, what do those words even mean? How can you claim that only two agents are responsible for the decision when it’s quite clear that the decision is a linked decision shared by three agents?
If you’re taking “divided responsibility” to mean “divide by the number of agents used as an input to the SIA-probability of the relevant world”, then your argument that SSA+total = SIA+divided boils down to this:
“If, in making decisions, you (an SIA agent) arbitrarily choose to divide your utility for a world by the number of subjectively indistinguishable agents in that world in the given state of information, then you end up with the same decisions as an SSA agent!”
That argument is, of course, trivially true, because the number of agents you’re dividing by will be the ratio between the SIA odds and the SSA odds of that world. If you allow me to choose arbitrary constants to scale the utility of each possible world, then of course your decisions will not be fully specified by the probabilities, no matter what decision theory you happen to use. Besides, you haven’t even given me any reason why it makes any sense at all to measure my decisions in terms of “responsibility” rather than simply using my utility function in the first place.
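To make that ratio claim concrete, here is a minimal numeric sketch. The incubator-style setup (a fair coin creating one observer on heads and two indistinguishable observers on tails) is an assumed toy example, not the experiment under discussion:

```python
# Minimal numeric check of the ratio claim above, using an assumed toy setup:
# a fair coin creates 1 observer on heads and 2 subjectively indistinguishable
# observers on tails.

PRIOR = {"heads": 0.5, "tails": 0.5}
N_OBS = {"heads": 1, "tails": 2}  # indistinguishable observers per world


def normalise(weights):
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}


# SSA: P(w) is proportional to prior(w) times the fraction of the reference class
# in w that shares my observations; with everyone indistinguishable that fraction
# is 1, so SSA just returns the prior.
ssa = normalise({w: PRIOR[w] for w in PRIOR})

# SIA: P(w) is proportional to prior(w) times the number of observers like me in w.
sia = normalise({w: PRIOR[w] * N_OBS[w] for w in PRIOR})

for w in PRIOR:
    print(w, "SSA:", ssa[w], "SIA:", sia[w], "SIA/SSA ratio:", sia[w] / ssa[w])
# heads  SSA: 0.5  SIA: 0.333...  ratio: 0.666...
# tails  SSA: 0.5  SIA: 0.666...  ratio: 1.333...
# The per-world ratio is proportional to N_OBS[w], so dividing an SIA agent's
# utility for w by N_OBS[w] cancels (up to a world-independent constant) exactly
# the factor by which SIA and SSA differ, and the decisions coincide.
```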
On the other hand, if you could justify including a notion of “divided responsibility” in my decision theory, then that same argument would tell me that SSA+total responsibility must be conceptually the wrong way to do things, because it uses total responsibility instead.
All in all, I do think anthropic probabilities are suspect for use in a decision theory, because:
1. They result in reflective inconsistency by failing to consider counterfactuals.
2. It doesn’t make sense to use them for decisions when the probabilities could depend upon the decisions, as in the Absent-Minded Driver (a sketch follows below).
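For the second point, here is a minimal sketch of the Absent-Minded Driver, using the payoffs from the standard statement of the problem (assumed here: exiting at the first intersection pays 0, exiting at the second pays 4, continuing past both pays 1):

```python
# Absent-Minded Driver: the driver cannot tell the two intersections apart, so he
# picks a single probability p of continuing that applies at both.

def planning_value(p):
    # P(exit at 1st) * 0 + P(reach 2nd then exit) * 4 + P(continue twice) * 1
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1


def p_first_intersection(p):
    # Of the intersections reached, 1 is always the first and p (in expectation)
    # is the second, so the self-locating probability of "this is the first
    # intersection" is 1 / (1 + p).
    return 1 / (1 + p)


best_p = max((i / 1000 for i in range(1001)), key=planning_value)
print("planning-optimal p:", best_p)                                  # about 2/3
print("P(first intersection) under that plan:", p_first_intersection(best_p))
print("P(first intersection) if the plan were p = 0:", p_first_intersection(0.0))
# The probability the driver should assign to his own location changes with the
# policy he is evaluating, which is exactly the circularity complained about above.
```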
That said, even if you can’t use those probabilities in your decision theory, there remains the question of “to what degree should I anticipate X, given my state of information”. I don’t think your argument on “divided responsibility” holds up, but even if it did, the question on subjective anticipation remains unanswered.
“If, in making decisions, you (an SIA agent) arbitrarily choose to divide your utility for a world by the number of subjectively indistinguishable agents in that world in the given state of information, then you end up with the same decisions as an SSA agent!”
Yes, that’s essentially it. However, the idea of divided responsibility has been proposed before (though not in those terms); it’s not just a hack I made up. The basic idea: if ten people need to vote unanimously “yes” for a policy that benefits them all, do they each consider that their vote made the difference between the policy and no policy, or that it contributed a tenth of that difference? Divided responsibility actually makes more intuitive sense in many ways, because we could replace the unanimity requirement with “you cause 1⁄10 of the policy to happen” and it’s hard to see what the difference is (assuming that everyone votes identically).
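To pin down how the two accountings enter a decision, here is a toy sketch of that vote; the benefit and cost figures are assumed purely for illustration:

```python
# Toy version of the unanimity vote above (figures assumed for illustration):
# a policy worth BENEFIT to the group passes only if all N_VOTERS vote "yes",
# and everyone votes identically, so "I vote yes" is equivalent to "it passes".

N_VOTERS = 10
BENEFIT = 100.0   # value of the policy to the group (assumed)
COST = 30.0       # personal cost to me of voting "yes" (assumed)

# Total responsibility: my vote makes the whole difference between policy and none.
value_total = BENEFIT - COST

# Divided responsibility: my vote contributes a tenth of that difference.
value_divided = BENEFIT / N_VOTERS - COST

print("vote yes under total responsibility?  ", value_total > 0)    # True
print("vote yes under divided responsibility?", value_divided > 0)  # False
# With no per-person cost the two accountings always agree in sign, which is why
# the unanimity framing and the "you cause 1/10 of it" framing feel the same;
# once something scales per person, the choice of accounting starts to matter.
```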
But all these approaches (SIA and SSA and whatever concept of responsibility) fall apart when you consider that UDT allows you to reason about agents that will make the same decision as you, even if they’re not subjectively indistinguishable from you. Anthropic probability can’t deal with such agents; worse, it can’t even consider counterfactual universes where “you” don’t exist, and it doesn’t distinguish well between identical copies of you that have access to distinct, non-decision-relevant information.
the question on subjective anticipation remains unanswered.
Ah, subjective anticipation… That’s an interesting question. I often wonder whether it’s meaningful. If we create 10 identical copies of me and expose 9 of them to one stimulus and 1 to another, what is my subjective anticipation of seeing one stimulus rather than the other? 10% is one obvious answer, but I might take a view of personal identity that fails to distinguish between identical copies of me, in which case 50% is correct. What if identical copies will be recombined later? Eliezer had a thought experiment where agents were two-dimensional, and could get glued to or separated from each other, and wondered whether this made any difference. I do too. And I’m also very confused about quantum measure, for similar reasons.
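For what it’s worth, the two candidate answers are just two counting rules; a bare-bones sketch of the 9-versus-1 copy setup described above:

```python
# The 10-copies question as bare counting: 9 copies will see stimulus A and
# 1 copy will see stimulus B (labels A and B are placeholders).

copies = ["A"] * 9 + ["B"] * 1

# View 1: I am equally likely to wake up as any one of the 10 copies.
p_b_counting_copies = copies.count("B") / len(copies)

# View 2: identical copies are not distinguished, so before the stimulus there
# are only two distinct future experiences, "see A" and "see B", weighted equally.
p_b_counting_experiences = 1 / len(set(copies))

print("anticipate B, counting copies:     ", p_b_counting_copies)       # 0.1
print("anticipate B, counting experiences:", p_b_counting_experiences)  # 0.5
```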
OK, the “you cause 1⁄10 of the policy to happen” argument is intuitively reasonable, but under that kind of argument divided responsibility has nothing to do with how many agents are subjectively indistinguishable and instead has to do with the agents who actually participate in the linked decision.
On those grounds, “divided responsibility” would give the right answer in Psy-Kosh’s non-anthropic problem. However, this also means your argument that SIA+divided = SSA+total clearly fails, both because of the example I just gave and because SSA+total gives the wrong answer in Psy-Kosh’s non-anthropic problem while SIA+divided does not.
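To spell that last claim out, here is a minimal calculation for a version of Psy-Kosh’s non-anthropic problem with assumed numbers (20 people, an 18/2 room split decided by a fair coin, and a bet that pays the charity $1 per green-roomer and costs it $3 per red-roomer if all green-roomers accept):

```python
# Version of Psy-Kosh's non-anthropic problem with assumed numbers: 20 people,
# a fair coin; heads puts 18 of them in green rooms and 2 in red, tails puts
# 2 in green and 18 in red. Everyone in a green room is asked whether to take
# a bet; if all accept, the charity gains $1 per green-roomer and loses $3 per
# red-roomer.

WORLDS = {
    "heads": {"prior": 0.5, "green": 18, "red": 2},
    "tails": {"prior": 0.5, "green": 2, "red": 18},
}


def payoff(w):
    return WORLDS[w]["green"] * 1 - WORLDS[w]["red"] * 3


# "I am in a green room" updates SSA and SIA to the same 0.9 / 0.1 posterior,
# since the total population is 20 in either world (hence "non-anthropic").
post = {w: WORLDS[w]["prior"] * WORLDS[w]["green"] for w in WORLDS}
norm = sum(post.values())
post = {w: v / norm for w, v in post.items()}

# SSA + total responsibility: I credit my decision with the whole payoff.
eu_ssa_total = sum(post[w] * payoff(w) for w in WORLDS)

# SIA + divided responsibility: divide by the number of green-roomers, i.e. the
# agents actually sharing this linked decision in that world.
eu_sia_divided = sum(post[w] * payoff(w) / WORLDS[w]["green"] for w in WORLDS)

# Ex-ante (updateless) value of the "everyone accepts" policy, for reference.
eu_ex_ante = sum(WORLDS[w]["prior"] * payoff(w) for w in WORLDS)

print("SSA+total says accept?  ", eu_ssa_total > 0)    # True  (the wrong answer)
print("SIA+divided says accept?", eu_sia_divided > 0)  # False (matches ex ante)
print("ex-ante value of accepting:", eu_ex_ante)       # -20.0
```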
Ah, subjective anticipation… That’s an interesting question. I often wonder whether it’s meaningful.
As do I. But, as Manfred has said, I don’t think that being confused about it is sufficient reason to believe it’s meaningless.
The divergence between the reference class of identical people and the reference class of agents making the same decision is why I advocate ADT (which is essentially UDT in an anthropic setting).