People seem to have pretty strong opinions about Newcomb’s Problem. I don’t have any trouble believing that a superintelligence could scan you and predict your reaction with 99.5% accuracy.
I mean, a superintelligence would have no trouble at all predicting that I would one-box… even if I hadn’t encountered the problem before, I suspect.
Ultimately, you either interpret “superintelligence” as sufficient to predict your reaction with significant accuracy, or you don’t. If not, the problem is just a straightforward probability question, as explained here, and becomes uninteresting.
Otherwise, if you interpret “superintelligence” as sufficient to predict your reaction with significant accuracy (especially a high accuracy like >99.5%), the words of this sentence...
And the twist is that Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.
...simply mean “One-box to win, with high confidence.”
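The expected-value arithmetic behind “one-box to win” is quick to check. Here is a minimal sketch, assuming the standard stakes ($1,000 always in box A, $1,000,000 in box B iff Omega predicted one-boxing) and treating the 99.5% figure as the probability that Omega’s prediction matches your actual choice:

```python
# Expected payoffs in Newcomb's Problem under an assumed 99.5% prediction
# accuracy and the standard stakes (both figures are the usual statement of
# the problem, not anything beyond it).

p = 0.995          # probability Omega's prediction matches your choice (assumed)
box_a = 1_000      # amount always in box A
box_b = 1_000_000  # amount in box B iff one-boxing was predicted

# If you one-box: with probability p, Omega predicted it and box B is full.
ev_one_box = p * box_b + (1 - p) * 0          # ≈ $995,000

# If you two-box: with probability p, Omega predicted it and box B is empty;
# only with probability (1 - p) do you get both boxes.
ev_two_box = p * box_a + (1 - p) * (box_a + box_b)  # ≈ $6,000

print(f"one-box: ${ev_one_box:,.0f}  two-box: ${ev_two_box:,.0f}")
```

On these numbers, one-boxing beats two-boxing by more than two orders of magnitude; the accuracy only needs to exceed about 50.05% for one-boxing to come out ahead.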
Summary: After disambiguating “superintelligence” (making the belief that Omega is a superintelligence pay rent), Newcomb’s problem turns into either a straightforward probability question or a fairly simple issue of rearranging the words in equivalent ways to make the winning answer readily apparent.