I liked the story, but I think it misses some of how Newcomb’s problem works (at least as I understand it). Also, welcome to LessWrong! I think this is a pretty good first post.
So, this is not a problem of decision theory, but a problem with Omega’s predictive capabilities in this story (maybe she was always right before this, but she wasn’t here).
In the case where you find yourself holding the $1,000,000 and the $1,000 is still available, sure, you can pick it up. That only happens if either Omega failed to predict what you would do, or if you somehow set things up such that you couldn’t break your precommitment, or would have to pay a steep price to do so.
If Omega really had perfect predictive capabilities in this story, this is what would happen:
-
Irene promptly walks up to the opaque box and opens it, revealing… nothing.
“What?!” Irene exclaimed. “But I didn’t mean to open the transparent box. I precommitted to not doing it!”
“Maybe Omega knew better than you,” Rachel said. “Anyway, now that you’re left with only the $1,000 in the transparent box, are you going to take it?”
“What? No! Then I would break my commitment and prove Omega right!”
“You can either do that, or walk out with $1,000. You’re not getting anything from not taking it; I’m sure that if I threatened you into taking it, you would. That’s the rational decision.”
Irene sighed. “Good point, I guess.” And she opened the box and walked out with the $1,000.
-
Also, it assumes a non-iterated game, and by non-iterated I don’t just mean that she doesn’t play against Omega again; she doesn’t play against anyone else again either. Otherwise this becomes part of her reputation, and taking the $1,000 is no longer “free”.
In the case where you find yourself holding the $1,000,000 and the $1,000 is still available, sure, you can pick it up. That only happens if either Omega failed to predict what you would do, or if you somehow set things up such that you couldn’t break your precommitment, or would have to pay a steep price to do so.
I don’t think that’s true. The traditional Newcomb’s problem could use the exact setup that I used here; the only difference would be that either the opaque box is empty, or Irene never opens the transparent box. The idea that the $1,000 is always “available” to the player is central to Newcomb’s problem.
In my comment, the “that” in “That only happens if” referred to you taking the $1,000, not to it being available. So to clarify:
If we assume that Omega’s predictions are perfect, then you only find the $1,000,000 in the box in cases where, for some reason, you don’t also take the $1,000:
Maybe you have some beliefs about why you shouldn’t do it.
Maybe it’s against your honor to do it.
Maybe you’re programmed not to do it.
Maybe, before you met Omega, you gave a friend $2,000 and told him to give it back to you only if you don’t take the $1,000, and otherwise to burn it.
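To spell out why that last arrangement works, here is a rough payoff sketch (the dollar figures come from the example above; the framing is mine, not from the original problem): once the bond is posted, grabbing the $1,000 leaves you strictly worse off than leaving it, so a predictor that can see the bond has every reason to fill the opaque box.

```python
# Payoffs after posting a $2,000 bond with a friend who returns it only
# if you leave the $1,000 in the transparent box, and burns it otherwise.
# Assumes the opaque box already contains the $1,000,000.

MILLION, SMALL, BOND = 1_000_000, 1_000, 2_000

take_small  = MILLION + SMALL - BOND  # take the $1,000, forfeit the bond: $999,000
leave_small = MILLION                 # leave it, get the bond back:       $1,000,000

print(take_small, leave_small)  # 999000 < 1000000, so the precommitment is self-enforcing
```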
If you find yourself walking out with the contents of both boxes, either you’re in a simulation or Omega was wrong.
If Omega is wrong (and it’s a one-shot, and you know you’re not in a simulation), then yeah, you have no reason not to take the $1,000 too. But the less accurate Omega is, the less Newcomb-like the problem is.
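To put a number on “less Newcomb-like”, here is a minimal sketch of the standard expected-value comparison, assuming Omega predicts your actual choice correctly with probability p (my framing, not anything from the story): one-boxing already pulls ahead once p exceeds roughly 0.5005, and the gap grows as Omega gets more accurate.

```python
# Expected payoff of one-boxing vs. two-boxing, assuming Omega predicts
# your actual choice correctly with probability p (same accuracy either
# way), puts $1,000,000 in the opaque box iff it predicted one-boxing,
# and always leaves $1,000 in the transparent box.

def expected_values(p):
    one_box = p * 1_000_000                               # paid only when the one-box prediction was right
    two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)   # a correct prediction leaves the opaque box empty
    return one_box, two_box

for p in (0.5, 0.5005, 0.6, 0.9, 0.99, 1.0):
    one, two = expected_values(p)
    print(f"p={p}: one-box ${one:,.0f}, two-box ${two:,.0f}")

# One-boxing wins when p * 1,000,000 > p * 1,000 + (1 - p) * 1,001,000,
# i.e. p > 1,001,000 / 2,000,000 ≈ 0.5005.
```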
I think you’re missing my point. After the $1,000,000 has been taken, Irene doesn’t suddenly lose her free will. She’s perfectly capable of taking the $1,000; she’s just decided not to.
You seem to think I’m making some claim like “one-boxing is irrational” or “Newcomb’s problem is impossible”, which is not at all what I’m doing. I’m trying to demonstrate that the idea of “rational agents just do what maximizes their utility and don’t worry about having a consistent underlying decision theory” appears to result in a contradiction as soon as Irene’s decision has been made.
I understood your point. What I’m saying is that Irene is indeed capable of also taking the $1,000, but if Omega isn’t wrong, she only gets the million in cases where, for some reason, she doesn’t (and I gave a few examples).
I think your scenario is just too narrow. Sure, if Omega is wrong, and it’s not a simulation, and it’s a complete one-shot, then the rational decision is to also take the $1,000. But if any of those conditions fail, then you’d better find some reason or way not to take that $1,000, or you’ll never see the million in the first place, or you’ll never see it in reality, or you’ll never see it in the future.
Put more simply, two-boxing is the right answer in the cases where Omega is wrong.
How can you know what maximises your utility without having a sound underlying theory? (But not, as I said in my other comment, a sound decision theory. You have to know whether free will is real, or whether predictors are impossible. Then you might be able to have a decision theory adequate to the problem.)