Yes, I haven’t studied the LW sequence in detail, but I’ve read the arxiv.org draft, so I’m familiar with the argument. :-) (Are there important things in the LW sequence that are not in the draft, so that I should read that too? I remember you did something where agents had both a selfish and a global component to their utility function, which wasn’t in the draft...) But from the tech report I got the impression that you were talking about actual SSA-using agents, not about the emergence of SSA-like behavior from ADT; e.g. on the last page, you say
Finally, it should be noted that a lot of anthropic decision problems can be solved without needing to work out the anthropic probabilities and impact responsibility at all (see for instance the approach in (Armstrong, 2012)).
which sounds as if you’re contrasting two different approaches in the tech report and in the draft, not as if they’re both about the same thing?
[And sorry for misspelling you earlier—corrected now, I don’t know what happened there...]
What I really meant is—the things in the tech report are fine as far as they go, but the Anthropic decision paper is where the real results are.
I agree with you that the isomorphism only holds if your reference class is suitable (and for selfish agents, you need to mess around with precommitments). The tech report does make some simplifying assumptions (as its point was not to find the full conditions for rigorous isomorphism results, but to illustrate that anthropic probabilities are not enough on their own).
Thanks!