I still don’t understand the fascination with this problem. A perfect predictor pretty strongly implies some form of determinism, right? If it predicts one-boxing and it’s perfect, you don’t actually have a choice—you are going to one-box, and justify it to yourself however you need to.
Thanks for this comment. I accidentally left a sentence out of the original post: “A good way to view this is that instead of asking what choice the agent should make, we will ask whether the agent made the best choice.”