If Eliezer knows that your prediction is based on good evidence that he would one-box, then that screens off the dependence between your prediction and his decision, so he should two-box.
Surely the same applies to Omega. By hypothesis, Eliezer knows that Omega is reliable, and since Eliezer does not believe in magic, he deduces that Omega’s prediction is based on good evidence, even if Omega doesn’t say anything about the evidence.
My only reason for being unsure that Eliezer would one-box against me is that there may be some reflexivity issue I haven’t thought of, but I don’t think this one works.
One issue is that I’m not going around making these offers to everyone, but the only role that plays in the original problem is to establish Omega’s reliability without Newcomb having to explain how Omega does it. And I don’t think it matters where the confidence in Omega’s reliability comes from, as long as it is there.
If you know that Omega came to a conclusion about you based on things you wrote on the Internet, and you know that the things you wrote imply you will one-box, then you are free to two-box.
Edit: basically, the question you have to ask is: if you know where Omega’s model of you comes from, is that model enough like you that, whatever you decide to do, the model will do the same?
Ah, but the thing you DON’T know is that Omega isn’t cheating. Cheating LOOKS like magic but isn’t. Implicit in my point, and certainly part of my thinking, is that unless you understand deeply and for sure HOW the trick is done, you can expect the trick to be done on you. So unless you can think of a million-dollar upside to not getting the million dollars, you should let yourself be the mark of the con man Omega, since your role as the mark seems to include getting a million dollars, for whatever reasons Omega has for giving it to you.
You should only two-box if you understand Omega’s trick so well that you are sure you can break it, i.e. that you will get the million dollars anyway. And the value of breaking Omega’s trick is that the world doesn’t need more successful con men.
Considering the likelihood of being confronted by a fake Omega rather than a real one, it would seem quite shortsighted not to address this problem when coding your FAI.
Unless he figures you’re not an idiot and you already know that, in which case it’s better for him to have a rule that says “always one-box on Newcomb-like problems whenever the payoff for doing so exceeds n times the payoff for failed two-boxing,” where n is a number (probably between 1.1 and 100) that reflects the payoff difference. Obviously, if he’s playing against something with no ability to predict his actions (e.g. a brick) he’s going to two-box no matter what. But a human with theory of mind is definitely not a brick and can predict his action with far better than random accuracy.
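A minimal sketch of the rule described above, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box if one-boxing was predicted, $1,000 in the transparent box either way). The function names and the expected-value check against a predictor of accuracy p are my own additions for illustration, not anything from the comment:

```python
BIG = 1_000_000    # opaque box: filled only if the predictor expected one-boxing
SMALL = 1_000      # transparent box: always present

def one_box_by_rule(n: float = 1.1) -> bool:
    """One-box whenever the one-boxing payoff exceeds n times the payoff
    of a failed two-box (the transparent box alone)."""
    return BIG > n * SMALL

def one_box_by_expected_value(p: float) -> bool:
    """Alternative check against a predictor with accuracy p.
    p close to 0.5 corresponds to the 'brick' case: no real predictive power."""
    ev_one_box = p * BIG                     # big box is full only if prediction was right
    ev_two_box = (1 - p) * BIG + SMALL       # big box is full only if prediction was wrong
    return ev_one_box > ev_two_box

if __name__ == "__main__":
    print(one_box_by_rule(100))              # True: 1,000,000 > 100 * 1,000
    for p in (0.5, 0.51, 0.9):
        print(p, one_box_by_expected_value(p))   # False, True, True
```

With these payoffs the expected-value check favors one-boxing as soon as the predictor is even slightly better than chance (p above roughly 0.5005), which is the point of the “far better than random accuracy” remark.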