Well… for whatever it’s worth, the case I assume is (3).
“Rice’s Theorem” prohibits Omega from doing this with all possible computations, but not with humans. It’s probably not even all that difficult: people seem strongly attached to their opinions about Newcomb’s Problem, so their actual move might not be too difficult to predict. Any mind that has an understandable reason for the move it finally makes is not all that difficult to simulate at a high level; you are doing it every time you imagine what it would do!
Omega is assumed to be in a superior position, but doesn’t really need to be. I mean, I have no trouble imagining Omega as described—Omega figures out the decision I come to, then acts accordingly. Until I actually come to a decision, I don’t know what Omega has already done—but of course my decision is simple: I take only box B. End of scenario.
If you’re trying to figure out what Omega will do first—well, you’re just doing that so that you can take both boxes, right? You just want to figure out what Omega does “first”, and then take both boxes anyway. So Omega knows that, regardless of how much you insist that you want to compute Omega “first”, and Omega leaves box B empty. You realize this and take both boxes. End of scenario again.
You may have some odd ideas left about free will. Omega can not only predict you, but probably do it without much trouble. Some humans might be able to take a pretty good guess too. Re: free will, see relevant posts, e.g. this.
But this is an ancient dilemma in decision theory (much like free will in philosophy); Google “causal decision theory”, “evidential decision theory”, and “Newcomblike” for enlightenment.
My strategy: I build a machine learning program that takes in half of the available data about Omega and how well it predicts people who are likely to attempt complex strategies, and data-mines on that. If the program achieves high accuracy in predicting the test set, and shows a significant chance that Omega will predict me to one-box, then I two-box.
Otherwise I one-box.
Reasoning: since predicting Omega is hard, it should be fairly obvious from this strategy that I am likely to one-box. So if I can tell that Omega is likely to predict this, and I can predict Omega accurately, then I two-box.
The goal is to try to force Omega into predicting that I will one-box, while actually exceeding Omega in predictive power.
Not sure this will work; I’d like to try to do the math at some point.
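A minimal sketch of what that strategy might look like, assuming a purely hypothetical dataset omega_games.csv of Omega’s past games; the column names, model choice, and decision thresholds are all invented for illustration:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical record of Omega's past games: feature columns describing each
# player, plus what Omega predicted they would do ("one-box" / "two-box").
games = pd.read_csv("omega_games.csv")
X = games.drop(columns=["omega_prediction"])
y = games["omega_prediction"]

# "Takes in half of the available data about Omega ... and data-mines on that."
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Can I predict Omega well, and would Omega peg me as a one-boxer?
accuracy = accuracy_score(y_test, model.predict(X_test))
my_row = X_test.iloc[[0]]  # stand-in for a feature row describing me
p_one_box = model.predict_proba(my_row)[0][list(model.classes_).index("one-box")]

# Two-box only if both conditions hold; otherwise one-box. Thresholds are made up.
decision = "two-box" if accuracy > 0.95 and p_one_box > 0.9 else "one-box"
print(decision)
```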
My strategy: I build a machine learning program that takes in half of the available data about Omega and how well it predicts people who are likely to attempt complex strategies, and data-mines on that. If the program achieves high accuracy in predicting the test set, and shows a significant chance that Omega will predict me to one-box, then I two-box.
…
The goal is to try to force Omega into predicting that I will one-box, while actually exceeding Omega in predictive power.
Dunno, you’d have to pay me a lot more than $1000 to go to all that trouble. Doesn’t seem rational to do all that work just to get an extra $1000 and a temporary feeling of superiority.
I dunno. I think I could make a ‘machine learning program’ that can predict a test set of ‘every guess out of 1,000,000 was right’ pretty quickly.
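A toy version of that point, with made-up labels: when the only evidence is that Omega was right in every observed game, a constant “Omega is right” predictor scores perfectly on any held-out split, so the accuracy check in the strategy above carries no information about actually out-predicting Omega.

```python
# Toy illustration: every recorded outcome says Omega guessed right, so a
# constant predictor trivially achieves 100% accuracy on the held-out half.
test_labels = ["correct"] * 500_000            # hypothetical test half of 1,000,000 games
predictions = ["correct"] * len(test_labels)   # the "machine learning program"
accuracy = sum(p == t for p, t in zip(predictions, test_labels)) / len(test_labels)
print(accuracy)  # 1.0
```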