You see two boxes, and you can either take both boxes or take only Box B. Box A is transparent and contains $1000. Box B contains a visible number, say 1033. The Bank of Omega, which operates by very clear and transparent mechanisms, will pay you $1M if this number is prime, and $0 if it is composite. Omega is known to select a prime number for Box B whenever it predicts that you will take only Box B, and a composite number whenever it predicts that you will take both boxes. Omega has previously predicted correctly in 99.9% of cases.
Attempting to translate this English description into a program segment which I can do algebra on, I get a type error. I can only resolve the type error by changing a vital aspect of the rules, and I have several options for how to do so, with no prior provided, so this question is unanswerable as written. This is a very common problem with decision theory work, and I think everyone should make a habit of writing decision theory questions as statically-typed programs, not as prose.
The issue is that, in order to predict whether you will take one or both boxes, Omega must supply all the inputs to your simulation, including the number that you see; and one of the inputs is Omega’s own output. Replacing the number with a boolean that you don’t get to look at would resolve the issue, and you almost do that by saying that you’re not allowed to factor the number, but the problem still fails to compile if you entangle your decision with any property of the number that’s even a little bit related to primeness.
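To make the circularity concrete, here is a minimal sketch of one attempted translation. The Haskell framing and the names (`Choice`, `omegaNumber`) are mine, not part of the original problem; 1033 is the prime from the setup and 1035 is just an arbitrary composite. The “type error” shows up as the number appearing on both sides of its own defining equation:

```haskell
data Choice = OneBox | TwoBox deriving (Eq, Show)

-- Omega wants to write a number n in Box B such that
--   n is prime      if it predicts that you take only Box B,
--   n is composite  if it predicts that you take both boxes.
-- But the prediction is your strategy applied to n itself, so n
-- appears on both sides of its own defining equation.
omegaNumber :: (Integer -> Choice) -> Integer
omegaNumber strategy = n
  where
    n | strategy n == OneBox = 1033  -- a prime
      | otherwise            = 1035  -- a composite (5 * 207)
-- Whether this equation determines a value at all depends on the strategy you plug in.
```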
the problem still fails to compile if you entangle your decision with any property of the number that’s even a little bit related to primeness
That doesn’t seem completely right to me. For example, oddness is related to primeness. If I wanted to do the opposite of what Omega predicted, I might try to one-box on even numbers and two-box on odd numbers. But then Omega can just give me an odd number that isn’t prime. More generally, if we drop the lottery and simplify the problem to just transparent Newcomb’s with prime/composite, then for any player strategy that isn’t exactly “two-box if prime, one-box if composite”, Omega can find a way to be right.
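A quick brute-force sketch of that claim, under the simplified prime/composite game with the lottery dropped (the helper names here are my own, not from the problem): Omega just searches for any number on which its prediction rule and the player’s strategy agree.

```haskell
import Data.List (find)

data Choice = OneBox | TwoBox deriving (Eq, Show)

isPrime :: Integer -> Bool
isPrime k = k > 1 && all (\d -> k `mod` d /= 0) [2 .. floor (sqrt (fromIntegral k :: Double))]

-- A number n lets Omega be right about this player if either
--   n is prime     and the player one-boxes on seeing n, or
--   n is composite and the player two-boxes on seeing n.
consistent :: (Integer -> Choice) -> Integer -> Bool
consistent player n
  | isPrime n = player n == OneBox
  | otherwise = player n == TwoBox

-- Omega looks for some number it can write in Box B and still predict correctly.
omegaFindsNumber :: (Integer -> Choice) -> Maybe Integer
omegaFindsNumber player = find (consistent player) [2 .. 10000]

parity, defiant :: Integer -> Choice
parity  n = if even n    then OneBox else TwoBox  -- one-box on even, two-box on odd
defiant n = if isPrime n then TwoBox else OneBox  -- try to do the opposite of Omega

-- omegaFindsNumber parity  ==> Just 2   (2 is even and prime: you one-box, Omega is right;
--                                        an odd composite like 9 would work for Omega too)
-- omegaFindsNumber defiant ==> Nothing  (no number makes Omega's prediction come out right)
```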
Another problem is that Omega might have multiple ways to be right, e.g. if your strategy is “one-box if prime, two-box if composite” or “one-box if odd, two-box if even”. But then it seems that regardless of how Omega chooses to break ties, as long as it predicts correctly, one-boxers cannot lose out to other strategies. That applies to the original problem as well, so I’m in favor of one-boxing there (see wedrifid’s and Carl’s comments for details).
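Extending the same hypothetical sketch: for a given strategy, list every number Omega could consistently write, and compare the worst- and best-case payoffs across Omega’s possible tie-breaks (still the simplified game: $1M for a prime in Box B, plus $1000 from Box A for two-boxers).

```haskell
data Choice = OneBox | TwoBox deriving (Eq, Show)

isPrime :: Integer -> Bool
isPrime k = k > 1 && all (\d -> k `mod` d /= 0) [2 .. floor (sqrt (fromIntegral k :: Double))]

-- Simplified payoffs: Box B pays $1M iff its number is prime; Box A holds $1000
-- and is collected only by two-boxers.
payoff :: Choice -> Integer -> Integer
payoff c n = (if isPrime n then 1000000 else 0) + (if c == TwoBox then 1000 else 0)

-- Every number Omega could write in Box B while still predicting this player correctly.
consistentNumbers :: (Integer -> Choice) -> [Integer]
consistentNumbers player =
  [ n | n <- [2 .. 200]
      , if isPrime n then player n == OneBox else player n == TwoBox ]

-- Worst and best payoff over all of Omega's consistent tie-breaking choices.
payoffRange :: (Integer -> Choice) -> (Integer, Integer)
payoffRange player = (minimum ps, maximum ps)
  where ps = [ payoff (player n) n | n <- consistentNumbers player ]

-- payoffRange (const OneBox)                              ==> (1000000, 1000000)
-- payoffRange (const TwoBox)                              ==> (1000,    1000)
-- payoffRange (\n -> if even n then OneBox else TwoBox)   ==> (1000,    1000000)
-- No best case exceeds the one-boxer's guaranteed $1M: two-boxing next to a
-- prime would mean Omega predicted wrong, which is ruled out here.
```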
Overall I agree that giving an underspecified problem and papering it over with “you don’t have a calculator” isn’t very nice, and it would be better to have well-specified problems in the future. For example, when Gary was describing the transparent Newcomb’s problem, he was careful to say that in the simulation both boxes are full. In our case the problem turned out to be kinda sorta solvable in the end, but I guess it was just luck.
Yep, this all seems correct; the player does not have enough degrees of freedom to prevent there from being a fixpoint, and it is possible to prove for all interpretations that no strategy does better than tying with the simple one-box strategy. But I feel, very strongly, that allowing this particular kind of ambiguity into decision theory problems is a reliably losing move. That road leads only to confusion, and that particular mistake is responsible for many (possibly most) previous failures to figure out decision theory.
The issue is that, in order to predict whether you will take one or both boxes, Omega must supply all the inputs to your simulation, including the number that you see
There doesn’t need to be a concrete simulation where all variables attain canonical values. Instead, some variables can retain their symbolic definitions, including as results of recursive calls. Such programs can sometimes be evaluated even without explicitly posing the problem of the existence of consistent variable assignments, especially if you are allowed to make some optimizing transformations during the evaluation (eliminating variables without evaluating them).
The number is also more than an unknown boolean (prime or not), because you can check whether it’s the same number as the lottery output.
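In the terms of the earlier made-up Haskell sketch, the symbolic-evaluation point is essentially lazy evaluation tying the knot: the number can stay an unevaluated thunk unless the strategy actually inspects it, so for a strategy that ignores the number, the choice is computed without the number ever taking a concrete value. A minimal sketch:

```haskell
data Choice = OneBox | TwoBox deriving (Eq, Show)

-- The same circular definition as before, but we ask for the choice and the
-- number together and let laziness decide how much of each is ever forced.
outcome :: (Integer -> Choice) -> (Choice, Integer)
outcome strategy = (choice, n)
  where
    choice = strategy n             -- Omega's prediction of you
    n | choice == OneBox = 1033     -- a prime
      | otherwise        = 1035     -- a composite

-- fst (outcome (const OneBox)) ==> OneBox
--   (the choice is computed without n ever being forced: the variable is
--    eliminated without being evaluated)
-- outcome (const OneBox)       ==> (OneBox, 1033)
-- outcome (\n -> if even n then OneBox else TwoBox)
--   diverges: the strategy forces n, which needs choice, which needs n, ...
```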
On one hand, I sympathize with your argument. When Gary was designing the transparent Newcomb’s problem, he was careful to point out that the simulation sees both boxes as full.
On the other hand, can you point out exactly where Carl’s solution, proposed on the Facebook thread, disagrees with your claim that the problem is unsolvable?