With Newcomb’s Problem, I always wonder how much the issue is confounded by formulations like “Omega predicted correctly in 99% of past cases.” Given some normally reasonable assumptions (even a very good predictor probably isn’t running a literal copy of your mind), it’s easy to conclude that you’re being reflective enough about the decision to fall into the small minority of unpredictable people. I’d be interested in seeing statistics on a version of Newcomb’s Problem that explicitly stated Omega predicts correctly every time because it runs an identical copy of you and your environment.
I also wonder if anyone has argued that you-the-atoms should two-box, you-the-algorithm should one-box, and which entity “you” refers to is just a semantic issue.