My take on this is the following: it's easier to see what is meant by disposition if you look at it in terms of AI. Replace the human with an AI, replace "disposition" with "source code", and replace "change your disposition to do some action X" with "rewrite your source code so that it does action X". Of course, the AI would still want to incorporate the probability of a glitch, as someone else already suggested.
If an AI running CDT expects to encounter a Newcomb-like problem, it would be rational for it to self-modify (in advance) to use a decision theory that one-boxes (i.e., the AI will change its disposition).
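A toy sketch of the expected-value comparison the AI would make before the predictor scans it (the payoff amounts and the 99% predictor accuracy are the standard illustrative numbers, not anything from this thread):

```python
def expected_payoff(one_boxer: bool, predictor_accuracy: float = 0.99) -> float:
    """Expected winnings in Newcomb's problem, given the agent's disposition.

    The opaque box holds $1,000,000 iff one-boxing was predicted;
    the transparent box always holds $1,000.
    """
    p_predicted_one_box = predictor_accuracy if one_boxer else 1 - predictor_accuracy
    big_box = 1_000_000 * p_predicted_one_box   # filled only if one-boxing was predicted
    small_box = 0 if one_boxer else 1_000       # two-boxers also take the visible $1,000
    return big_box + small_box

# An AI that can rewrite its own source code before the prediction is made
# compares the two dispositions and adopts the better one:
print(expected_payoff(one_boxer=True))   # ~990,000
print(expected_payoff(one_boxer=False))  # ~11,000
```

Since the expected payoff of being a one-boxer dominates, rewriting itself into a one-boxer in advance is the rational move even by CDT's own lights.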
Likewise, an AI surrounded by threat-fulfillers would rationally self-modify to become a threat-ignorer. (The debate is not about whether these are desirable dispositions to acquire—that’s common ground.) Do you think it follows from this that the act of ignoring a doomsday threat is also rational?