“If, in making decisions, you (an SIA agent) arbitrarily choose to divide your utility for a world by the number of subjectively indistinguishable agents in that world in the given state of information, then you end up with the same decisions as an SSA agent!”
Yes, that’s essentially it. However, the idea of divided responsibility has been proposed before (though not in those terms) - it’s not just a hack I made up. The basic idea: if ten people need to vote unanimously “yes” for a policy that benefits them all, do they each consider that their vote made the difference between the policy and no policy, or that it contributed a tenth of that difference? Divided responsibility actually makes more intuitive sense in many ways, because we could replace the unanimity requirement with “you cause 1⁄10 of the policy to happen” and it’s hard to see what the difference is (assuming that everyone votes identically).
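To make the quoted equivalence concrete, here’s a minimal sketch (Python) of a toy incubator-style setup I made up: heads creates one copy, tails creates two subjectively indistinguishable copies, and the bet payoffs are purely illustrative. It just checks that SIA probabilities with the world-utility divided by the copy count rank actions the same way as SSA probabilities with the undivided utility:

```python
# Toy incubator case: a fair coin creates 1 copy on heads, 2 indistinguishable copies on tails.
# Illustrative only -- the worlds, copy counts, and payoffs are made up for this sketch.
worlds = {
    "heads": {"prior": 0.5, "copies": 1},
    "tails": {"prior": 0.5, "copies": 2},
}

# Total utility of each world if every copy takes the given action
# (e.g. "bet": each copy stakes $1 and wins $1.60 if the coin was tails).
utility = {
    "bet":  {"heads": -1.0, "tails": 1.2},
    "pass": {"heads": 0.0, "tails": 0.0},
}

def sia_prob(w):
    # SIA: weight each world by prior * number of copies in your epistemic situation.
    z = sum(v["prior"] * v["copies"] for v in worlds.values())
    return worlds[w]["prior"] * worlds[w]["copies"] / z

def ssa_prob(w):
    # SSA (reference class = copies in your situation): just the prior here,
    # since every copy in each world is in the same situation.
    return worlds[w]["prior"]

def ev_sia_divided(action):
    return sum(sia_prob(w) * utility[action][w] / worlds[w]["copies"] for w in worlds)

def ev_ssa_total(action):
    return sum(ssa_prob(w) * utility[action][w] for w in worlds)

for action in utility:
    print(action, round(ev_sia_divided(action), 3), round(ev_ssa_total(action), 3))
# The two expected values differ only by a constant factor (the sum of prior * copies),
# so the two rules always rank actions the same way.
```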
But all these approaches (SIA and SSA and whatever concept of responsibility) fall apart when you consider that UDT allows you to reason about agents that will make the same decision as you, even if they’re not subjectively indistinguishable from you. Anthropic probability can’t deal with these—worse, it can’t even consider counterfactual universes where “you” don’t exist, and doesn’t distinguish well between identical copies of you that have access to distinct, non-decision-relevant information.
the question on subjective anticipation remains unanswered.
Ah, subjective anticipation… That’s an interesting question. I often wonder whether it’s meaningful. If we create 10 identical copies of me and expose 9 of them to one stimulus and 1 to another, what is my subjective anticipation of seeing one stimulus over the other? 10% is one obvious answer, but I might take a view of personal identity that fails to distinguish between identical copies of me, in which case 50% is correct. What if identical copies will be recombined later? Eliezer had a thought experiment where agents were two-dimensional, and could get glued to or separated from each other, and wondered whether this made any difference. I do too. And I’m also very confused about quantum measure, for similar reasons.
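For what it’s worth, the two answers come from two different counting rules; a trivial sketch (Python, where “count copies” and “count experience-types” are just my own labels for the two views of personal identity):

```python
# 10 identical copies: 9 are shown stimulus A, 1 is shown stimulus B.
copies = {"A": 9, "B": 1}

# View 1: each copy counts as a separate "me", so anticipation tracks copy counts.
p_count_copies = {s: n / sum(copies.values()) for s, n in copies.items()}

# View 2: identical copies are not distinguished, so only the distinct
# post-stimulus experience-types count, each getting equal weight.
p_count_experiences = {s: 1 / len(copies) for s in copies}

print(p_count_copies)       # {'A': 0.9, 'B': 0.1}
print(p_count_experiences)  # {'A': 0.5, 'B': 0.5}
```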
OK, the “you cause 1⁄10 of the policy to happen” argument is intuitively reasonable, but under that kind of argument divided responsibility has nothing to do with how many agents are subjectively indistinguishable from you; instead, it depends on how many agents actually participate in the linked decision.
On those grounds, “divided responsibility” would give the right answer in Psy-Kosh’s non-anthropic problem. But this also means your claim that SIA + divided responsibility = SSA + total responsibility clearly fails: first because of the example I gave before, and second because SSA + total gives the wrong answer in Psy-Kosh’s non-anthropic problem while SIA + divided does not.
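For concreteness, here’s how that comparison works out in a sketch (Python). I’m using the payoffs as I recall them from Psy-Kosh’s original post - $1000 to charity if the coin was tails and all nine deciders say “yea”, $100 if it was heads and the lone decider says “yea”, $700 if the deciders say “nay” - so treat the exact numbers as assumptions. The probabilities are just ordinary conditioning (the problem is non-anthropic, so SSA and SIA agree that P(tails | you’re a decider) = 0.9); the only thing that varies is the notion of responsibility:

```python
# Psy-Kosh's non-anthropic problem, with the payoffs as I recall them (treat as assumptions):
# fair coin; heads -> 1 decider, tails -> 9 deciders (out of 10 people).
# All deciders say "yea": $1000 donated if tails, $100 if heads.
# All deciders say "nay": $700 donated either way.
payoff = {"yea": {"heads": 100, "tails": 1000}, "nay": {"heads": 700, "tails": 700}}
deciders = {"heads": 1, "tails": 9}

# Ex-ante (before learning you're a decider): "nay" is the better policy.
for a in payoff:
    print("ex-ante", a, 0.5 * payoff[a]["heads"] + 0.5 * payoff[a]["tails"])
# yea: 550, nay: 700

# Having learned you're a decider, ordinary conditioning gives P(tails) = 0.9.
p = {"heads": 0.1, "tails": 0.9}

# Total responsibility: you treat your vote as producing the whole payoff.
for a in payoff:
    print("total", a, sum(p[w] * payoff[a][w] for w in p))
# yea: 910, nay: 700 -> "yea", contradicting the ex-ante answer.

# Divided responsibility: you claim 1/n of the payoff, n = number of deciders.
for a in payoff:
    print("divided", a, sum(p[w] * payoff[a][w] / deciders[w] for w in p))
# yea: 110, nay: 140 -> "nay", matching the ex-ante answer.
```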
Ah, subjective anticipation… That’s an interesting question. I often wonder whether it’s meaningful.
As do I. But, as Manfred has said, I don’t think that being confused about it is sufficient reason to believe it’s meaningless.
The divergence between the reference class of identical people and the reference class of agents making the same decision is why I advocate ADT (which is essentially UDT in an anthropic setting).