In the case where you find yourself holding the $1,000,000 and the $1,000 is still available, sure, you can pick it up. That only happens if either Omega failed to predict what you would do, or you somehow set things up such that you couldn’t break your precommitment, or could only do so at a big price.
I don’t think that’s true. The traditional Newcomb’s problem could use the exact setup that I used here; the only difference would be that either the opaque box is empty, or Irene never opens the transparent box. The idea that the $1,000 is always “available” to the player is central to Newcomb’s problem.
In my comment, “that” in “That only happens if” referred to you taking the $1,000, not to it being available. So, to clarify:
If we assume that Omega’s predictions are perfect, then you only find $1,000,000 in the box in cases where for some reason you don’t also take the $1,000:
Maybe you have some beliefs about why you shouldn’t do it.
Maybe it’s against your honor to do it.
Maybe you’re programmed not to do it.
Maybe before you met Omega you gave a friend $2,000 and told him to give it back to you only if you don’t take the $1,000, and otherwise to burn it (the payoff sketch below spells this out).
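For that last precommitment, here is a minimal payoff sketch, assuming a perfect predictor and the dollar amounts above; the function and variable names are purely illustrative:

```python
# Payoffs for the "$2,000 with a friend" precommitment, assuming a
# perfect predictor and the dollar amounts used in this thread.

MILLION = 1_000_000  # opaque box, filled iff Omega predicts you won't take the $1,000
SMALL = 1_000        # transparent box, always on offer
STAKE = 2_000        # handed to the friend; burned if you take the $1,000

def net_payoff(take_small: bool) -> int:
    """Total money you walk away with, given a perfect predictor."""
    big = 0 if take_small else MILLION       # prediction matches your actual choice
    small = SMALL if take_small else 0
    stake_back = 0 if take_small else STAKE  # friend burns the stake otherwise
    return big + small + stake_back - STAKE

print(net_payoff(take_small=False))  # 1000000
print(net_payoff(take_small=True))   # -1000: the burned stake outweighs the $1,000

# Even after the $1,000,000 is in hand, leaving the $1,000 is worth more:
# +$2,000 (stake returned) beats +$1,000 (stake burned).
```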
If you find yourself going out with the contents of both boxes, either you’re in a simulation or Omega was wrong.
If Omega is wrong (and it’s a one-shot, and you know you’re not in a simulation), then yeah, you have no reason not to take the $1,000 too. But the less accurate Omega is, the less Newcomb-like the problem is.
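How accurate Omega needs to be can be made concrete with a rough expected-value sketch, assuming Omega is right with some fixed probability p (a parameter the thread doesn’t pin down) and the usual $1,000,000 / $1,000 payoffs:

```python
# Expected value of one-boxing vs two-boxing when Omega predicts
# correctly with probability p (the same accuracy for either choice).

MILLION, SMALL = 1_000_000, 1_000

def ev_one_box(p: float) -> float:
    # The opaque box holds the million iff Omega correctly predicted one-boxing.
    return p * MILLION

def ev_two_box(p: float) -> float:
    # The opaque box holds the million iff Omega wrongly predicted one-boxing.
    return (1 - p) * MILLION + SMALL

for p in (0.5, 0.6, 0.9, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))

# One-boxing has the higher expectation only when p > 0.5005; the less
# accurate Omega is, the less the problem rewards leaving the $1,000 behind.
```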
I think you’re missing my point. After the $1,000,000 has been taken, Irene doesn’t suddenly lose her free will. She’s perfectly capable of taking the $1000; she’s just decided not to.
You seem to think I’m making some claim like “one-boxing is irrational” or “Newcomb’s problem is impossible”, which is not at all what I’m doing. I’m trying to demonstrate that the idea of “rational agents just do what maximizes their utility and don’t worry about having to have a consistent underlying decision theory” appears to result in a contradiction as soon as Irene’s decision has been made.
I understood your point. What I’m saying is that Irene is indeed capable of also taking the $1,000, but if Omega isn’t wrong, she only gets the million in cases where for some reason she doesn’t (and I gave a few examples).
I think your scenario is just too narrow. Sure, if Omega is wrong, it’s not a simulation, and it’s a complete one-shot, then the rational decision is to also take the $1,000. But if any of these aren’t true, then you’d better find some reason or way not to take that $1,000, or you’ll never see the million in the first place, or you’ll never see it in reality, or you’ll never see it in the future.
Put more simply, two-boxing is the right answer in the cases where Omega is wrong.
How can you know what maximises your utility without having a sound underlying theory? (But NOT, as I said in my other comment, a sound decision theory. You have to know whether free will is real, or whether predictors are impossible. Then you might be able to have a decision theory adequate to the problem.)