So on the face of it, it seems that the only accessible outcomes are:
original-X chooses “sim” and gets +1; all simulated copies also choose “sim” and get +0.2 (and then get destroyed?)
original-X chooses “not sim” and gets +0.9; no simulated copies are made
and it seems like in fact everyone does better to choose “sim” and will do so. This is also fairly clearly the best outcome on most plausible attitudes to simulated copies’ utility, though the scenario asks us to suppose that X doesn’t care about those.
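For concreteness, here is a minimal sketch of those two outcomes; the copy count N is an illustrative choice of mine, since the scenario leaves it abstract.

```python
# Tally of the two accessible outcomes listed above.
N = 1000  # illustrative; the scenario just says "simulated copies"

# choice -> (original-X's payoff, per-simulated-copy payoff, number of sims)
outcomes = {
    "sim":     (1.0, 0.2, N),
    "not sim": (0.9, None, 0),  # no simulated copies are ever made
}

for choice, (orig, per_sim, n_sim) in outcomes.items():
    sims = f"{n_sim} sims get +{per_sim} each" if n_sim else "no sims exist"
    print(f"{choice!r}: original-X gets +{orig}; {sims}")
```

Outcome by outcome, every instance that exists does better under “sim”: +1 beats +0.9 for the original, and the copies (with their +0.2) only exist in that branch at all.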
I’m not sure what the point of this is, though. I’m not seeing anything paradoxical or confusing (except insofar as the very notion of simulated copies of oneself is confusing). It might be more interesting if the simulated copies got more utility when they chose “not sim” rather than less (as in the description of the scenario), so that your best action would depend on whether you think you’re in a simulation or not (and then if you expect to choose “sim”, you expect that most copies of you are simulations, in which case maybe you shouldn’t choose “sim”; and if you expect to choose “not sim”, you expect that you are the only copy, in which case maybe you should choose “sim”).
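To make that instability concrete, here is a toy calculation. The +0.5 payoff for a simulated copy choosing “not sim” is invented purely for illustration (the thread never gives flipped payoffs); it only needs to beat the +0.2 for choosing “sim”.

```python
# Toy sketch of the flipped variant: simulated copies now get MORE for
# choosing "not sim". The 0.5 figure is invented for illustration.
N = 1000  # illustrative number of simulated copies

def eu(choice: str, expect_everyone_sims: bool) -> float:
    """Expected utility given a belief about what all copies will choose."""
    if expect_everyone_sims:
        # N sims + 1 original exist; you are a sim with probability N/(N+1).
        p_sim = N / (N + 1)
        if choice == "sim":
            return p_sim * 0.2 + (1 - p_sim) * 1.0
        return p_sim * 0.5 + (1 - p_sim) * 0.9
    # If you expect "not sim" all around, no sims are made: you must be
    # the original, who prefers +1 ("sim") to +0.9 ("not sim").
    return 1.0 if choice == "sim" else 0.9

for belief in (True, False):
    best = max(("sim", "not sim"), key=lambda c: eu(c, belief))
    print(f"expect everyone to choose sim = {belief}: best reply is {best!r}")
# Expecting "sim" makes "not sim" look better, and vice versa:
# neither expectation is self-consistent.
```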
I’m wondering whether perhaps something like that was what pallas intended, and the current version just has “sim” and “not sim” switched at one point...
It’s tempting to say that, but I think pallas actually meant what he wrote. Basically, hitting “not sim” gets you a guaranteed 0.9 utility. Hitting “sim” gets you expected utility of roughly 0.2, approaching that value as the number of copies increases. Even though each person strictly prefers “sim” to “not-sim,” and a CDT agent would choose “sim,” it appears that choosing “not-sim” gets you more expected utility.
Edit: not-sim has higher expected utility for an entirely selfish agent who does not know whether he is simulated or not, because his choice affects not only his utility payout but also, acausally, his state of simulation. Of course, this depends on my interpretation of anthropics.
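Here is a rough sketch of that calculation, under one reading of the setup (the specifics are my assumptions, not pallas’s): choosing “sim” creates N simulated copies, every copy chooses alike, and an agent who can’t tell whether it’s simulated treats itself as a uniform draw over all N+1 instances.

```python
# Anthropic expected utility for a selfish agent who doesn't know
# whether it is the original or one of the n_copies simulations.
def expected_utility(choice: str, n_copies: int) -> float:
    if choice == "not sim":
        # No simulations are ever made, so the agent is certainly
        # the original and collects the guaranteed payoff.
        return 0.9
    # Choosing "sim" means n_copies + 1 indistinguishable instances:
    # one original at +1 and n_copies simulations at +0.2 each.
    return (1.0 + 0.2 * n_copies) / (n_copies + 1)

for n in (1, 10, 1000):
    print(n, round(expected_utility("sim", n), 4), expected_utility("not sim", n))
# 1 0.6 0.9
# 10 0.2727 0.9
# 1000 0.2008 0.9  -> EU("sim") falls toward 0.2, well below 0.9.
```

On this reading a CDT agent still picks “sim”, since its choice can’t causally change which instance it happens to be; that is exactly the tension described above.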
Oh, I see. Nice. Preferring “not sim” in this case feels rather like falling victim to Simpson’s paradox, but I’m not at all sure that’s not just a mistake on my part.
Thanks for the explanation. I had no idea what was actually going on here.