No, I still don’t get why adding the ith-digit-of-pi clause changes Newcomb’s problem at all. If Omega says you’ll one-box and you two-box, then Omega was wrong, plain and simple. The ith digit of pi is an independent clause. I don’t see how one’s desire to make i=0 by two-boxing after already getting the million is any different from one’s wanting to make Omega wrong by two-boxing after getting the million. If you are the type of person who, after getting the million, thinks, “Gee, I want i=0! I’ll two-box!”, then Omega wouldn’t have given you the million to begin with. After determining that he would not give you the million, he’d look at the ith digit of pi and either put the million in or not. Your two-boxing has nothing to do with i.
If D=false and E=true and there’s $1M in the box and I two-box, then (in the particular Newcomb’s variant described above) the predictor is not wrong. The predictor correctly computed that (D xor E) is true, and set up the box accordingly, as the rules of this particular variant prescribe.
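A minimal sketch may make that rule concrete. The encoding is an assumption for illustration only: take D = “the predictor’s simulation says the chooser one-boxes” and E = “the ith digit of pi is zero”; these names track the thread’s usage but aren’t spelled out in it.

```python
# Illustrative sketch of this variant's box-filling rule (assumed encoding:
# D = predictor simulates one-boxing, E = the i-th digit of pi is zero).
def fill_big_box(D: bool, E: bool) -> int:
    """The big box holds $1M iff (D xor E), per the rules of this variant."""
    return 1_000_000 if (D != E) else 0

# The case described above: D=False, E=True, so the box holds $1M, and a
# chooser who takes both boxes has not falsified the prediction.
assert fill_big_box(False, True) == 1_000_000
```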
Yes, but your two-boxing didn’t cause i=0; rather, the million was there because i=0. I’m saying that if (D or E) = true and you get a million dollars and you two-box, then you haven’t caused E to be true. E was true before you two-boxed, or, if it was not, then Omega was wrong and thought D = one-box, when in fact you are a two-boxer.
Everything you just said is true.*
Everything you just said is also consistent with everything I said in my original post.
*Except for one typo: you wrote (D or E) instead of (D xor E).
I’m in the same confused camp as Laura. This paragraph confuses me:
So E does indeed “depend on” C, in the particular sense you’ve specified. Thus, if I happen to have a strong enough preference that E output True, then TDT (as currently formulated) will tell me to two-box for the sake of that goal. But that’s the wrong decision, of course. In reality, I have no choice about the specified digit of pi.
Why is it the wrong decision? If Omega can perfectly predict the TDT agent, and the TDT agent sees $1M, then the TDT agent must be in a world where the ith digit of pi is 0. It is an unlikely world, to be sure.
Actually, you’re in a different camp than Laura: she agrees that it’s incorrect to two-box regardless of any preference you have about the specified digit of pi. :)
The easiest way to see why two-boxing is wrong is to imagine a large number of trials, with a different chooser, and a different value of i, for each trial. Suppose each chooser strongly prefers that their trial’s particular digit of pi be zero. The proportion of two-boxer simulations that end up with the digit equal to zero is no different than the proportion of one-boxer simulations that end up with the digit equal to zero (both are approximately .1). But the proportion of the one-boxer simulations that end up with an actual $1M is much higher (.9) than the proportion of two-boxer simulations that end up with an actual $1M (.1).
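To make those frequencies concrete, here is a small Monte Carlo sketch under the same assumed D/E encoding as above, modeling the unknown digit of pi as uniform on 0–9 (a stand-in for our ignorance of the digit, not a claim about pi):

```python
import random

random.seed(0)
N = 100_000  # simulated trials per disposition

def run_trial(one_boxer: bool):
    """One trial with a perfect predictor: returns (digit_is_zero, million_in_box)."""
    D = one_boxer                      # perfect prediction matches the disposition
    E = (random.randrange(10) == 0)    # unknown digit of pi modeled as uniform
    return E, (D != E)                 # $1M in the big box iff (D xor E)

for one_boxer in (True, False):
    trials = [run_trial(one_boxer) for _ in range(N)]
    p_zero = sum(E for E, _ in trials) / N
    p_million = sum(m for _, m in trials) / N
    who = "one-boxers" if one_boxer else "two-boxers"
    print(f"{who}: P(digit = 0) ~ {p_zero:.2f}, P($1M in box) ~ {p_million:.2f}")

# one-boxers: P(digit = 0) ~ 0.10, P($1M in box) ~ 0.90
# two-boxers: P(digit = 0) ~ 0.10, P($1M in box) ~ 0.10
```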
But the proportion of two-boxers that saw $1M in the box that end up with their digit being 0 and with the $1M is even higher (1). I already saw the $1M, so, by two-boxing, aren’t I just choosing to be one of those who see their E module output True?
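That conditional proportion can be checked with the same sketch as above (again an illustrative model, with the digit treated as uniform), this time restricting attention to two-boxers who find the box full:

```python
import random

random.seed(1)
N = 100_000
D = False  # two-boxers only, under the same assumed encoding as above

digit_when_million = []
for _ in range(N):
    E = (random.randrange(10) == 0)  # unknown digit of pi modeled as uniform
    if D != E:                       # $1M in the box iff (D xor E)
        digit_when_million.append(E)

frac = sum(digit_when_million) / len(digit_when_million)
print(f"P(digit = 0 | two-boxed and saw $1M) = {frac:.2f}")  # 1.00

# With D = False, the box can only be full when E = True, so every
# two-boxer who sees the $1M also sees the digit come up zero.
```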
Not if a counterfactual consequence of two-boxing is that the large box (probably) would be empty (even though in fact it is not empty, as you can already see).
That’s the same question that comes up in the original transparent-boxes problem, of course. We probably shouldn’t try to recap that whole debate in the middle of this thread. :)
Don’t worry; I don’t want to do that :). If I recall the original transparent-boxes problem correctly, I agree with you on what to do in that case.
Just to check my memory, in the original problem, there are two transparent boxes, A and B. You see that A contains $1M and B contains $1000. You know that B necessarily contains $1000, but A would have contained $1M iff it were the case that you will decide to take only A. Otherwise, A would have been empty. The conclusion (with which I agree) is that you should take only A. Is that right? (If I’m misremembering something crucial, is there a link to the full description online?) [ETA: I see that you added a description to your post. My recollection above seems to be consistent with your description.]
In the original problem, if we use the “many choosers” heuristic, there are no choosers who two-box and yet who get the $1M. Therefore, you cannot “choose to be” one of them. This is why two-boxing should have no appeal to you.
In contrast, in your new problem, there are two-boxers who get the $1M and who get their E module to output True. So you can “choose to be” one of them, no? And since they’re the biggest winners, that’s what you should do, isn’t it?
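The contrast drawn in the last two paragraphs falls out of the same toy model (an illustrative sketch under the assumed encoding, not the post’s own formulation):

```python
import random

random.seed(2)
N = 100_000

def big_box_full(one_boxer: bool, variant: bool) -> bool:
    """Original game: $1M iff predicted one-boxing. Variant: $1M iff (D xor E)."""
    D = one_boxer                        # perfect predictor
    if not variant:
        return D
    E = (random.randrange(10) == 0)      # unknown digit of pi modeled as uniform
    return D != E

for variant in (False, True):
    winners = sum(big_box_full(False, variant) for _ in range(N)) / N
    game = "variant" if variant else "original"
    print(f"{game} game: fraction of two-boxers who see $1M ~ {winners:.2f}")

# original game: 0.00 -- no two-boxer ever sees the $1M, so there is no such
# chooser to "choose to be"; variant game: ~0.10 -- such choosers do exist.
```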
Have some Omega thought experiments been one-shot, never-to-be-repeated type deals, or is my memory incorrect?
Yes, I wasn’t thinking through what would happen when the ith digit isn’t 0. You can’t switch to one-boxing in that case, because you don’t know when that would be; or rather, when you see an empty box you are forced to do the same thing as when you see a full box, due to the way the game is set up.
Yes, they have been one-shot, and that’s the intent in this example as well. Still, it can be useful to look at the expected distribution of outcomes over a large enough number of trials that have the same structure, in order to infer the (counterfactual) probabilities that apply to a single trial.
Yes, they have. And most can be formulated as such as long as p(Omega is honest) is given as ‘high’ somewhere.