The agents I’m considering one-box, as shown in this post. This is because the agent logically knows (as a consequence of their beliefs) that if they take one box, they get $1,000,000, and if they take both boxes, they get $1,000. That is, the agent believes the contents of the box are a logical consequence of their own action.
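A minimal sketch of this reasoning (hypothetical function names, assuming a perfect predictor and the standard Newcomb payoffs): the agent evaluates each action under its belief that the box contents follow logically from the action itself, then picks whichever action it believes pays more.

```python
# Toy model of the one-boxing argument, assuming a perfect predictor:
# the agent believes the opaque box's contents are a logical consequence
# of its own choice, so it evaluates each action under that belief.

def believed_payoff(action: str) -> int:
    """Payoff the agent believes follows logically from its action."""
    if action == "one-box":
        # Predictor foresaw one-boxing, so the opaque box holds $1,000,000.
        return 1_000_000
    else:  # "two-box"
        # Predictor foresaw two-boxing, so the opaque box is empty;
        # the agent gets only the transparent box's $1,000.
        return 1_000

# The agent picks the action whose believed logical consequence pays more.
best_action = max(["one-box", "two-box"], key=believed_payoff)
print(best_action, believed_payoff(best_action))  # -> one-box 1000000
```

Note the sketch deliberately omits any "causal" branch where the box contents are held fixed independently of the action; under this agent's beliefs, no such branch exists, which is exactly why it one-boxes.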