Part of the definition of Newcomb’s problem is that Omega is never wrong. So one-boxing with transparent boxes creates a contradiction, because you’re doing what Omega predicted you wouldn’t! This isn’t a situation where you should go “oh, I guess it’s no biggie that the infallible agent was wrong”; it’s a situation where you should go “holy shit, the supposedly infallible agent was wrong!”
One-boxing with transparent boxes should cause you to believe that you are the result of Omega simulating what you would do if the box were empty, in which case you should take the empty box and expect to pop out of existence a moment later (content in the knowledge that the real(er) you will be getting a million dollars soon).
(Well, you shouldn’t literally expect to pop out of existence (I’m not sure what that would be like), but that is what the situation would look like from the outside. In order for you to win on the actual problem, you will have to be the sort of person who would take the one empty box, and in order for Omega to correctly determine that, some version of you will have to actually do it. It’s possible that Omega doesn’t need to simulate you as a full conscious being in order to correctly predict your decision, but it’ll at least need to simulate you as a decision algorithm that thinks it’s a full conscious being.)
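To make the simulation argument concrete, here is a toy sketch (the function names and payoff setup are my own illustration, not anything canonical): Omega predicts by running your decision algorithm on the hypothetical “box 1 is empty” observation, and only the agent whose simulated copy still takes the single empty box ends up facing a full box in the real run.

```python
# Toy sketch of the "Omega simulates your decision algorithm" argument
# for transparent-box Newcomb. All names here are hypothetical illustration.

def one_boxer(box1_contents: int) -> str:
    """A decision algorithm that one-boxes even when box 1 looks empty."""
    return "one-box"

def two_boxer(box1_contents: int) -> str:
    """A decision algorithm that grabs both boxes whenever box 1 is empty."""
    return "one-box" if box1_contents > 0 else "two-box"

def omega_fills_box1(agent) -> bool:
    # Omega predicts by running the agent on the hypothetical "box 1 empty"
    # observation; it fills box 1 only if the simulated copy still one-boxes.
    return agent(0) == "one-box"

def payoff(agent) -> int:
    box1 = 1_000_000 if omega_fills_box1(agent) else 0
    box2 = 1_000
    choice = agent(box1)  # the "real" run sees the actual contents
    return box1 if choice == "one-box" else box1 + box2

print(payoff(one_boxer))  # 1000000: the simulated copy took the empty box
print(payoff(two_boxer))  # 1000: Omega leaves box 1 empty, agent takes both
```

The point of the sketch is just that the prediction step has to execute your decision algorithm on the empty-box observation, which is the “some version of you actually does it” part of the argument.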
Hm, interesting take on it. Sounds reasonable to me. But at the expense of making the whole problem sound more ridiculous :D. Arguments that depend on a particular nature of reality highlight the unphysicality of Omega.
Also, it suggests an interesting strategy if presented with box 1 empty: delay deciding as long as possible!