I’m not clear at all what the problem is, but it seems to be semantic. It’s disturbing that this post can get 17 upvotes with almost no (2?) comments actually referring to what you’re saying, indicating that no one else here really gets the point either.
It seems you have an issue with the word ‘dependent’ and the definition that Eliezer provided. Under that definition, E (the ith digit of pi) would be dependent on C (our decision to one-box or two-box) if we two-boxed and got a million dollars, because then we would know that E = 0, and we would not have known this if we had not two-boxed. So we can infer E from C, thus dependency. By Eliezer’s definition, which seems to be a special information-theoretic definition, I see no problem with this conclusion. The problem only seems to arise if you then take the intuitive definition of the word ‘dependent’ as meaning ‘contingent upon,’ as in ‘Breaking the egg is contingent upon my dropping it.’ Your semantic complaint goes beyond Newcomb’s problem: by Eliezer’s definition of ‘dependent,’ the pH of water (E) is dependent upon our litmus testing it, since the result of the litmus test (C) allows us to infer the water’s actual pH. C lets us infer E, thus dependency.
Sorry, the above post omits some background information. If E “depends on” C in the particular sense defined, then the TDT algorithm mandates that when you “surgically alter” the output of C in the factored causal graph, you must correspondingly surgically alter the output of E in the graph.
So it’s not at all a matter of any intuitive connotation of “depends on”. Rather, “depends on”, in this context, is purely a technical term that designates a particular test that the TDT algorithm performs. And the algorithm’s prescribed use of that test culminates in the algorithm making the wrong decision in the case described above (namely, it tells me to two-box when I should one-box).
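To make the test concrete, here is a minimal sketch (an illustration only, not the actual TDT machinery) of why E comes out “dependent” on C in this variant. It assumes, as described above, that the large box holds $1M iff (D xor E), and that the predictor’s output D tracks whatever C is surgically set to.

```python
# A minimal sketch (not the actual TDT code) of the "depends on" test described
# above. Assumptions beyond the thread: the large box holds $1M iff (D xor E),
# and the predictor's output D mirrors whatever the choice C is surgically set to.

def infer_E(one_box: bool, saw_million: bool = True) -> bool:
    """Infer what E must be, given the observation (D xor E) == saw_million,
    once C has been surgically set to `one_box`."""
    D = one_box                 # a perfect predictor tracks the altered choice
    return D != saw_million     # solve (D xor E) == saw_million for E

# Surgically setting C to one-box forces the inferred E to False; setting C to
# two-box forces it to True. E's inferred value co-varies with C, so E "depends
# on" C in the technical sense, even though nothing about pi is causally
# affected by the choice.
assert infer_E(one_box=True) is False
assert infer_E(one_box=False) is True
```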
No, I still don’t get why adding in the ith-digit-of-pi clause changes Newcomb’s problem at all. If Omega says you’ll one-box and you two-box, then Omega was wrong, plain and simple. The ith digit of pi is an independent clause. I don’t see how one’s desire to make i=0 by two-boxing after already getting the million is any different from wanting to make Omega wrong by two-boxing after getting the million. If you are the type of person who, after getting the million, thinks, “Gee, I want i=0! I’ll two-box!” then Omega wouldn’t have given you the million to begin with. After determining that he would not give you the million, he’d look at the ith digit of pi and either put the million in or not. Your two-boxing has nothing to do with i.
If D=false and E=true and there’s $1M in the box and I two-box, then (in the particular Newcomb’s variant described above) the predictor is not wrong. The predictor correctly computed that (D xor E) is true, and set up the box accordingly, as the rules of this particular variant prescribe.
Yes, but your two-boxing didn’t cause i=0; rather, the million was there because i=0. I’m saying that if (D or E) = true and you get a million dollars, and you two-box, then you haven’t caused E=0. E=0 before you two-boxed, or, if it did not, then Omega was wrong and thought D = one-box, when in fact you are a two-boxer.
Everything you just said is true.*
Everything you just said is also consistent with everything I said in my original post.
*Except for one typo: you wrote (D or E) instead of (D xor E).
I’m in the same confused camp as Laura. This paragraph confuses me:

So E does indeed “depend on” C, in the particular sense you’ve specified. Thus, if I happen to have a strong enough preference that E output True, then TDT (as currently formulated) will tell me to two-box for the sake of that goal. But that’s the wrong decision, of course. In reality, I have no choice about the specified digit of pi.

Why is it the wrong decision? If Omega can perfectly predict the TDT agent, and the TDT agent sees $1M, then the TDT agent must be in a world where the ith digit of pi is 0. It is an unlikely world, to be sure.
Actually, you’re in a different camp than Laura: she agrees that it’s incorrect to two-box regardless of any preference you have about the specified digit of pi. :)
The easiest way to see why two-boxing is wrong is to imagine a large number of trials, with a different chooser, and a different value of i, for each trial. Suppose each chooser strongly prefers that their trial’s particular digit of pi be zero. The proportion of two-boxer simulations that end up with the digit equal to zero is no different than the proportion of one-boxer simulations that end up with the digit equal to zero (both are approximately .1). But the proportion of the one-boxer simulations that end up with an actual $1M is much higher (.9) than the proportion of two-boxer simulations that end up with an actual $1M (.1).
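For anyone who wants to see the arithmetic run, here is a minimal Monte Carlo sketch of that many-trials argument. It is an illustration only, not part of the original exchange, and it assumes the setup described above: the unknown digit is uniform over 0–9, the predictor is perfectly accurate, and the large box holds $1M iff (D xor E).

```python
# Minimal Monte Carlo sketch of the "many choosers, many values of i" argument.
# Assumptions (not spelled out in the thread): the digit is uniform over 0-9,
# the predictor is perfect, and the large box holds $1M iff (D xor E), where
# D = "predicted to one-box upon seeing $1M" and E = "the digit of pi is 0".
import random

def run_trials(one_boxer: bool, n: int = 100_000):
    digit_zero = million = 0
    for _ in range(n):
        E = random.randrange(10) == 0   # this trial's digit happens to be 0
        D = one_boxer                   # perfect prediction of the chooser's type
        box_full = D != E               # $1M is present iff (D xor E)
        digit_zero += E
        million += box_full             # whoever sees the full box keeps the $1M
    return digit_zero / n, million / n

print("one-boxers (P[digit=0], P[$1M]):", run_trials(True))    # ~ (0.1, 0.9)
print("two-boxers (P[digit=0], P[$1M]):", run_trials(False))   # ~ (0.1, 0.1)
```

Preferring that the digit be zero gives no reason to two-box: the digit comes out zero equally often either way, while one-boxing is what makes seeing the $1M likely in the first place.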
But the proportion of two-boxers who saw $1M in the box and who end up both with their digit being 0 and with the $1M is even higher (1). I already saw the $1M, so, by two-boxing, aren’t I just choosing to be one of those who see their E module output True?
Not if a counterfactual consequence of two-boxing is that the large box (probably) would be empty (even though in fact it is not empty, as you can already see).
That’s the same question that comes up in the original transparent-boxes problem, of course. We probably shouldn’t try to recap that whole debate in the middle of this thread. :)
Don’t worry; I don’t want to do that :). If I recall the original transparent-boxes problem correctly, I agree with you on what to do in that case.
Just to check my memory, in the original problem, there are two transparent boxes, A and B. You see that A contains $1M and B contains $1000. You know that B necessarily contains $1000, but A would have contained $1M iff it were the case that you will decide to take only A. Otherwise, A would have been empty. The conclusion (with which I agree) is that you should take only A. Is that right? (If I’m misremembering something crucial, is there a link to the full description online?) [ETA: I see that you added a description to your post. My recollection above seems to be consistent with your description.]
In the original problem, if we use the “many choosers” heuristic, there are no choosers who two-box and yet who get the $1M. Therefore, you cannot “choose to be” one of them. This is why two-boxing should have no appeal to you.
In contrast, in your new problem, there are two-boxers who get the $1M and who get their E module to output True. So you can “choose to be” one of them, no? And since they’re the biggest winners, that’s what you should do, isn’t it?
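As a quick illustration of the contrast being drawn here (a sketch of my own, not from the thread), one can enumerate which chooser types ever see the $1M under each box-filling rule, assuming a perfect predictor:

```python
# Which (chooser type, E) combinations ever see the $1M? D = "predicted to
# one-box", E = "the digit of pi is 0"; a perfect predictor means D equals
# the chooser's actual type.
def winners(box_rule):
    return [(one_boxer, E) for one_boxer in (True, False)
                           for E in (True, False) if box_rule(one_boxer, E)]

# Original transparent-boxes rule: the box holds $1M iff D.
print(winners(lambda D, E: D))       # no two-boxer (one_boxer=False) ever sees the $1M
# Digit-of-pi variant: the box holds $1M iff (D xor E).
print(winners(lambda D, E: D != E))  # includes (False, True): a two-boxer whose E is True
```

Whether one can “choose to be” that last kind of winner is exactly what the counterfactual-consequence reply above disputes.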
Have some Omega thought experiments been one-shot, never-to-be-repeated type deals, or is my memory incorrect?
Yes, I wasn’t thinking through what would happen when the ith digit wasn’t 0. You can’t switch to one-boxing in that case, because you don’t know when that would be; or rather, when you see an empty box you are forced to do the same as when you see a full box, due to the way the game is set up.
Yes, and that’s the intent in this example as well. Still, it can be useful to look at the expected distribution of outcomes over a large enough number of trials that have the same structure, in order to infer the (counterfactual) probabilities that apply to a single trial.
Yes, they have. And most can be formulated as such as long as p(Omega is honest) is given as ‘high’ somewhere.