The role that would normally be played by simulation is here played by a big evidential study of what people with different genes do. This is why it matters whether the people in the study are good decision-makers or not—only when the people in the study are in a position similar to my own do they fulfill this simulation-like role.
Yes, the idea is that they are sufficiently similar to you so that the study can be applied (but also sufficiently different to make it counter-intuitive to say it’s a simulation). The subjects of the study may be told that there already exists a study, so that their situation is equivalent to yours. It’s meant to be similar to the medical Newcomb problems in that regard.
I briefly considered the idea that TDT would see the study as a simulation, but discarded the possibility, because in that case the studies in classic medical Newcomb problems could also be seen as simulations of the agent to some degree. The "abstract computation that an agent implements" is admittedly a somewhat vague notion, but if one were willing to go that far, wouldn't TDT collapse into EDT?
Under the formulation that leads to one-boxing here, TDT will behave very similarly to EDT whenever the evidence concerns the unknown output of the decision problem your agent is facing. Both are in some sense trying to "join the winning team"—EDT because it treats taking the winning-team action as evidence of having won, and TDT only in problems where which team you are on is identical to which action you take.
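The EDT side of this comparison can be made concrete with a small calculation. The sketch below is purely illustrative: all probabilities and payoffs are made-up numbers for a hypothetical genetic-Newcomb study, and the names (`p_gene_given_action`, `expected_value`) are my own. EDT conditions on its own action, so it evaluates each action by the study's conditional frequency of the "winning" gene given that action:

```python
# Illustrative EDT-style expected-value computation for a hypothetical
# genetic Newcomb problem. All numbers are invented for the sketch.

# Hypothetical study statistics: P(winning gene | action observed).
p_gene_given_action = {"one_box": 0.9, "two_box": 0.1}

def expected_value(action):
    """EDT expected utility: condition the payoff on taking this action."""
    p = p_gene_given_action[action]
    opaque = 1_000_000 * p              # gene present => opaque box is full
    transparent = 1_000 if action == "two_box" else 0  # two-boxers also take the small box
    return opaque + transparent

# EDT picks the action whose conditional expectation is highest.
best = max(p_gene_given_action, key=expected_value)
print(expected_value("one_box"))   # -> 900000.0
print(expected_value("two_box"))   # -> 101000.0
print(best)                        # -> one_box
```

The point of the comparison above is that a TDT agent reaches the same answer here only because "which team you are on" and "which action you take" coincide; in an ordinary medical Newcomb problem they come apart, and TDT and EDT diverge.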