If you can correctly predict that something odd happens, that’s equivalent to something odd happening, with your prediction mechanism if not with the world.
You can only correctly predict that something odd happens if you know you will still one-box, and you can only know you will still one-box if you are still going to one-box. As long as you model the problem as one where you still have the choice to two-box, you haven’t observed anything odd happening.
Which is why I didn’t specify “identical.”
All yous observing an empty box behave the same unless there is something else differentiating them, which (in the scenario considered) there is not, unless you incorporate sufficient randomness into your decision-making process, which you have no reason to want to do. The only way for the counterfactual you who faces the empty box, and thereby determines the state of the box, to one-box, so that the real you can get the $1,000,000, is for the real you to also one-box in a hypothetical encounter with the empty box. The only way you could actually encounter the empty box is if you two-box after encountering it, which you should not want to do.
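A minimal sketch of this reasoning in Python, under stated assumptions: the $1,000 in the second box is the standard Newcomb figure rather than something specified in this thread, Omega is modeled as filling box 1 exactly when your policy would one-box even on seeing it empty (which is how the counterfactual is being treated here), and the policy names are made up for illustration.

```python
# Minimal sketch, not the canonical problem statement: the payoffs and the
# prediction rule are assumptions. Omega is modeled as filling box 1 exactly
# when the agent's policy would one-box even on seeing box 1 empty, which is
# how the counterfactual is treated in the comment above.

def omega_fills_box1(policy):
    """Perfect predictor: box 1 is full iff you would one-box on seeing it empty."""
    return policy(box1_appears_full=False) == "one-box"

def play(policy):
    box1_full = omega_fills_box1(policy)
    action = policy(box1_appears_full=box1_full)  # what you actually do
    payoff = (1_000_000 if box1_full else 0) + (1_000 if action == "two-box" else 0)
    return box1_full, action, payoff

def committed_one_boxer(box1_appears_full):
    # One-boxes no matter what it sees, empty box included.
    return "one-box"

def fair_weather_one_boxer(box1_appears_full):
    # "Precommits" to one-box but grabs both boxes if box 1 looks empty.
    return "one-box" if box1_appears_full else "two-box"

print(play(committed_one_boxer))     # (True, 'one-box', 1000000)
print(play(fair_weather_one_boxer))  # (False, 'two-box', 1000)
```

In this toy model the committed one-boxer never actually reaches the empty-box branch; the only agent who ever sees an empty box is the one whose reaction to it is exactly what put it there.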
I’m not assuming the actual existence of more than one you, just the existence of at least one real you that matters. If you break your precommitment and two-box just because you see an empty box, the real you is losing out on the $1,000,000. It doesn’t matter how you reconcile the apparent existence of the situation, the apparent emptiness of the box, Omega’s infallibility and your precommitment, as long as that reconciliation doesn’t lead to breaking the precommitment; you can worry about that afterwards (personally I’m leaning towards assuming you don’t exist and are just a counterfactual).
As long as you model the problem as one where you still have the choice to two-box, you haven’t observed anything odd happening.
You don’t count a contradiction (perfect predictor being wrong) as “odd?”
[rest of your post]
Oh, okay. So when you said “imagine box 1 is empty,” you didn’t actually mean to treat box 1 as empty—that wasn’t supposed to be “real,” and you agree that if it was real, the logic of the problem would compel you to take box 2. Rather than treating it like a normal hypothetical, your intent is to precommit to one-boxing even if box 1 is empty so that it won’t be. Does that sound right?
You don’t count a contradiction (perfect predictor being wrong) as “odd?”
As I argued above, you haven’t actually observed a perfect predictor being wrong at that point.
Oh, okay. So when you said “imagine box 1 is empty,” you didn’t actually mean to treat box 1 as empty—that wasn’t supposed to be “real,”
Not quite. As repeatedly said, it doesn’t matter what you think (or even whether you think anything), just that the mere reality of the situation should not change how you act. If the only way you can manage that is to pretend it’s not real, then so be it.
and you agree that if it was real, the logic of the problem would compel you to take box 2.
No, it doesn’t. The logic of the problem merely predicts that this will happen because you are a two-boxer only pretending to be a one-boxer. You still can (and should) choose to one-box, and there is (as stated) no outside force compelling you. You shouldn’t be very surprised if you do find an outside force compelling you, but it won’t be the logic of the problem, unless you let it (and you shouldn’t).
Rather than treating it like a normal hypothetical, your intent is to precommit to one-boxing even if box 1 is empty so that it won’t be.
If you want to put it that way. Anyone who wants the $1,000,000 in a transparent-box Newcomb problem has to be prepared to do the same.
No, it doesn’t. The logic of the problem merely predicts that this will happen because you are a two-boxer only pretending to be a one-boxer. You still can (and should) choose to one-box
See, this is what I find unusual. You predict that you will one-box, and you also predict that this would cause a contradiction with the assumptions of the problem. This is like saying “I predict I will prove that 2=3 at noon tomorrow,” and yet you don’t see the oddness. Again, the fact that a proof exists (of something like “this formulation of Newcomb’s problem with transparent boxes is inconsistent”) is as good as the proof itself.
No. Not at all. The only reason we are even having this discussion is the highly defective way the human brain usually models choice, which among other things inappropriately equates the capability to make a certain choice with a material, ill-defined possibility of that choice happening. Those are two entirely different things, even though I’m afraid this all just sounds like nonsense to you.
Not just ability, you’re actually expecting to make that choice, which I most certainly associate with calculating a probability.
I never said that! I said repeatedly that it doesn’t matter what you think (inside the hypothetical case), only that you one-box. Sure, if you absolutely have to make predictions, and if assuming that Omega will turn out wrong does not change your resolve to one-box, that’s one possible way to deal with the problem. But I already said that personally I’m leaning towards thinking that none of that is actually happening, and as long as thinking you are going to fail and two-box doesn’t impede your one-boxing, that works too (as implied above). Or anything else that doesn’t stop you from one-boxing.
So your tentative solution is to break the problem in the same way as ata, by saying “well, what the problem really means is that you see someone who looks just like Omega pose you the problem, but it might be a simulation.” (Note that Omega cannot simulate Omega for this to work, so the problem is genuinely different. If Omega could simulate Omega, it would have no need to simulate you with any uncertainty).
Let’s see if I understand your more general statement—in this formulation of Newcomb’s problem, it would be better if you picked box 1 even when it was empty. Therefore you should do something (anything) so that you will pick box 1 even if it is empty. Am I getting closer to what you think?
So your tentative solution is to break the problem in the same way as ata, by saying “well, what the problem really means is that you see someone who looks just like Omega pose you the problem, but it might be a simulation.”
No, simulation is just one of the possibilities I listed way up-thread:
(e.g. Omega is wrong for once, you are hallucinating, you are a simulation, you exist in the place where the truth value of counterfactuals is located, you are a free-floating counterfactual and don’t actually exist, etc.)
But it’s not my favored conclusion, because it leads to doing silly things like holding off on deciding so that you are simulated for a longer time and exist longer, as you suggested. My favored one is the last: that you don’t exist at all, not even inside a simulation or a Tegmark IV type of thing. After one-boxing you’d (hypothetically) switch to the Tegmark IV version, of course (or to Omega just being wrong; nothing differentiates those).
Let’s see if I understand your more general statement—in this formulation of Newcomb’s problem, it would be better if you picked box 1 even when it was empty. Therefore you should do something (anything) so that you will pick box 1 even if it is empty. Am I getting closer to what you think?
I don’t disagree with anything in particular here, but you sound as if you would draw conclusions from it that I wouldn’t draw.
Well, the possibilities listed up-thread other than “you don’t exist” make the problem no longer exactly Newcomb’s problem, unless you two-box. So I like your favorite, although I’m probably thinking of a stricter version of “don’t exist” that makes it more nonsensical to talk about “what would you (who don’t exist) do?”
E.g. if carrots didn’t exist, what would the carrots that don’t exist taste like? :D
You don’t expect anything contradictory to actually happen. Because you would one-box no matter what you see, you will never end up seeing an empty box.
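To put the same point another way, continuing the assumptions of the earlier sketch (illustrative policy names, box 1 filled exactly when the policy would one-box on seeing it empty): enumerate every (box state, action) pair and keep only those consistent with a perfect predictor. For the unconditional one-boxer, no consistent world contains an empty box, which is exactly why that observation is never actually made.

```python
# Hypothetical consistency check, same assumptions as the earlier sketch:
# a world is a (box1_full, action) pair. It is consistent only if box 1's
# state matches what the policy would do on seeing an empty box, and the
# action matches what the policy does given the actual box state.
from itertools import product

committed_one_boxer = lambda box1_appears_full: "one-box"
fair_weather_one_boxer = lambda box1_appears_full: "one-box" if box1_appears_full else "two-box"

def consistent_worlds(policy):
    worlds = []
    for box1_full, action in product([True, False], ["one-box", "two-box"]):
        prediction_holds = box1_full == (policy(box1_appears_full=False) == "one-box")
        action_holds = action == policy(box1_appears_full=box1_full)
        if prediction_holds and action_holds:
            worlds.append((box1_full, action))
    return worlds

print(consistent_worlds(committed_one_boxer))     # [(True, 'one-box')] -- no world with an empty box
print(consistent_worlds(fair_weather_one_boxer))  # [(False, 'two-box')]
```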