Perhaps the main point of my original post is to be very detail-oriented about what a perspective is, and how ontology relativizes to a perspective.
I have no argument that in most circumstances this difference in perspectives is essential. However, if we are talking about decision theories, agents who do not believe their actions are determined just because the predictor knows them (definitely knows, by definition, not merely believes) end up running algorithms that two-box: since they believe “their actions are not determined,” two-boxing looks like the higher-utility choice. Unless I’m missing something in your argument again. But if not, then my point is that this relativization does not produce a better decision-making algorithm.
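To spell out the reasoning I have in mind, here is a minimal sketch (not from the original post; payoff amounts are the standard Newcomb values, assumed for illustration) of the dominance argument such an agent runs: it treats the contents of box B as already fixed independently of its choice, and then two-boxing comes out ahead in every case.

```python
# Hypothetical illustration of the "my action is not determined" agent.
# Box A always holds $1,000; box B holds $0 or $1,000,000, fixed in advance.

def payoff(box_b_contents: int, two_box: bool) -> int:
    """Total payout given what the predictor already put in box B."""
    return box_b_contents + (1_000 if two_box else 0)

# The agent evaluates both actions against each possible, already-fixed
# state of box B, holding that state constant across actions.
for contents in (0, 1_000_000):
    print(contents, payoff(contents, two_box=False), payoff(contents, two_box=True))
# In every row two-boxing pays $1,000 more, so this agent two-boxes.
```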
The agents I’m considering one-box, as shown in this post. This is because the agent logically knows (as a consequence of their beliefs) that if they take one box, they get $1,000,000, and if they take both boxes, they get $1,000. That is, the agent believes the contents of the box are a logical consequence of their own action.
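A minimal sketch of that reasoning (again with assumed payoff amounts, and hypothetical names): because the agent takes the predictor’s fill to be a logical consequence of its own action, the state of box B is not held fixed across actions, and one-boxing comes out ahead.

```python
# Hypothetical illustration of the agent described above: box contents
# follow logically from the chosen action, by the agent's own lights.

def outcome(action: str) -> int:
    """Payout when the box contents are a logical consequence of the action."""
    if action == "one-box":
        return 1_000_000          # the predictor, by definition, filled box B
    else:  # "two-box"
        return 1_000 + 0          # box A only; the predictor left box B empty

best = max(("one-box", "two-box"), key=outcome)
print(best, outcome(best))        # -> one-box 1000000
```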