A true Omega needs to make both P(box B full | take one box) and P(box B empty | take both boxes) high. The proposed scheme ensures that P(box B full | habitual one-boxer) and P(box B empty | habitual two-boxer) are high, which is not quite the same.
If I understand correctly, the distinction you're making between "habitual one-boxer" and "take one box" is that the first is about the player's past history and the second about the future. If so, I guess you are right: I'm indeed using the past to make my prediction, as using the future is beyond my reach.
But I believe you're missing the point. My program is not an iterated Newcomb's Problem, because Omega does not perform any prediction along the way. It performs only one prediction, for the last game, and the human is not warned. It does not care at all about the player's reputation, only about his acts in situations where he (the human player) can't know whether he is playing or not.
But another point of view is possible, and it is what comes to mind when you run the program: it coerces the player into being either a one-boxer or a two-boxer if he wants to play at all. After any two-boxing, the player will have to spend a very long time one-boxing to get back to the state where he is again seen as a one-boxer. As written, the program is likely (to the chosen accuracy level) to make its prediction while the player is struggling to be a one-boxer.
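To make that concrete, here is a minimal sketch of how I read the scheme, not the actual program: the `ACCURACY` threshold, the `WINDOW` size, and the prompt text are all placeholder choices of mine. Omega watches a window of past rounds, and once the player's habit is settled to the chosen accuracy, the current, unannounced round silently becomes the real game.

```python
ACCURACY = 0.99   # hypothetical accuracy level chosen for Omega
WINDOW = 100      # hypothetical number of past rounds Omega inspects

def habit(history):
    """Fraction of the recent window in which the player one-boxed."""
    recent = history[-WINDOW:]
    return sum(c == "one" for c in recent) / len(recent)

def play():
    history = []
    while True:
        # Before the player moves, Omega checks whether the habit is
        # settled to the chosen accuracy. If so, this unannounced round
        # is the real game, and box B is filled from the prediction.
        real_round = len(history) >= WINDOW and (
            habit(history) >= ACCURACY or habit(history) <= 1 - ACCURACY
        )
        if real_round:
            prediction = "one" if habit(history) >= ACCURACY else "two"
            box_b = 1_000_000 if prediction == "one" else 0

        choice = ""
        while choice not in ("one", "two"):
            choice = input("Take [one] box or [two] boxes? ").strip()
        history.append(choice)

        if real_round:
            payout = box_b if choice == "one" else box_b + 1_000
            print(f"That was the real game. You won ${payout}.")
            return

if __name__ == "__main__":
    play()
```

Note how the sketch exhibits the coercion: a single "two" drags the habit fraction below the threshold, so the player must one-box for a long stretch before Omega will ever treat him as a one-boxer again.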
As a human player, what goes through my mind while running my program is: OK, I want to get the million dollars, therefore I have to become a one-boxer.