This is an old thread, but I can’t imagine the problem going away anytime soon, so let me throw some chum into the waters:
Omega says:
“I predict you’re a one-boxer. I can understand that. You’ve got really good reasons for picking that, and I know you would never change your mind. So I’m going to give you a slightly different version of the problem:
I’ve decided to make both boxes transparent. Oh, and by the way, my predictions aren’t 100% correct.”
Question: Do you make any different decisions in the transparent-box case? If so, what about your original argument is different in the transparent-box case?
If you’re really a one-boxer, that means you can look at an empty box and still pick it.
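For concreteness, here is a minimal sketch of that transparent variant, assuming the standard Newcomb payoffs ($1,000 always in box A, $1,000,000 in box B iff one-boxing was predicted) and a predictor that forecasts your disposition correctly with probability p. The modeling choice, the function names, and the parameter p are all my own illustration, not part of the puzzle as stated.

```python
import random

PAYOUT_A = 1_000       # box A always holds $1,000
PAYOUT_B = 1_000_000   # box B holds $1,000,000 iff one-boxing was predicted

def play(p, stubborn_one_boxer):
    # Crude model: the predictor forecasts the agent's disposition
    # correctly with probability p and fills box B accordingly.
    forecast_correct = random.random() < p
    predicted_one_box = (stubborn_one_boxer if forecast_correct
                         else not stubborn_one_boxer)
    box_b = PAYOUT_B if predicted_one_box else 0
    # Both boxes are transparent, so the agent sees box_b before choosing.
    # A stubborn one-boxer takes only box B even when it is visibly empty.
    return box_b if stubborn_one_boxer else PAYOUT_A + box_b

def average(p, stubborn, n=100_000):
    return sum(play(p, stubborn) for _ in range(n)) / n

for p in (0.9, 0.99):
    print(f"p={p}: stubborn one-boxer ~ {average(p, True):,.0f}, "
          f"two-boxer ~ {average(p, False):,.0f}")
```

The point of the question survives the model: the stubborn strategy only earns its expectation if you really will take the visibly empty box on the occasions when the prediction was wrong.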
I was surprised that the rec.puzzles FAQ answer to this doesn’t appear in the replies. (Maybe it’s here and I just missed it.)
While you are given that P(do X | predict X) is high, it is not given that P(predict X | do X) is high. Indeed, specifying that P(predict X | do X) is high would be equivalent to specifying that the being could use magic (or reverse causality) to fill the boxes. Therefore, the expected gain from either action cannot be determined from the information given.
In other words, we can’t tell whether (or how much) our actions determine the outcome, so we can’t make a rational decision.
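To see the FAQ’s point numerically, here is a sketch using the standard $1,000 / $1,000,000 payoffs; q1 and q2 are my own labels for the two conditional probabilities the problem statement leaves unspecified.

```python
# q1 = P(predict one-box | you one-box)
# q2 = P(predict one-box | you two-box)
# The problem gives P(do X | predict X) but fixes neither q1 nor q2.

def ev_one_box(q1):
    return q1 * 1_000_000

def ev_two_box(q2):
    return 1_000 + q2 * 1_000_000

# If your choice is strong evidence about the prediction (q1 high, q2 low),
# one-boxing wins; if the prediction is independent of your choice
# (q1 == q2), two-boxing wins by exactly $1,000. Nothing in the problem
# statement tells us which regime we are in.
for q1, q2 in [(0.99, 0.01), (0.5, 0.5), (0.9, 0.9)]:
    print(f"q1={q1}, q2={q2}: one-box EV = {ev_one_box(q1):,.0f}, "
          f"two-box EV = {ev_two_box(q2):,.0f}")
```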