I can write a program that works as Newcomb’s problem is described if I go for the “imperfect predictor” version, where the being is merely right “most of the time”. One way to do it would be to let the player run a number of practice (or calibration) games, then, at a time chosen by the predictor, make one of those games “real”. The calibration plays would stand in for the superbeing’s minute observation of the player’s behaviour, which indeed cannot easily be done.
Are you claiming to be able to write a program that can win, playing Omega in an iterated Newcomb’s Problem contest, basing its predictions only on the record of the outcomes of past training rounds? How high a prediction accuracy do you expect to be able to get?
As proposed, the idea is to run the program in “test mode”. To simulate the super-being Omega, we give the program the authority to decide when the game stops being a simulation (predictor calibration) and starts being the “real game”. To be fair, this switch (or the rules governing it) would be communicated to some external judge before the actual “real play”, but not to the player (otherwise it would obviously break the calibration accuracy). A possible rule could be to start the real game when some fixed accuracy is reached (say, the prediction is right 99% of the time), or after a fixed number of calibration games.
Writing such a predictor is as easy as it gets: just a simple loop that waits until the target accuracy is reached, i.e. until either one-boxing or two-boxing rises above the expected threshold. Obviously, if the player is random, that could take quite a long time (though even against a random player it should happen eventually). But with a rational player it should be fast enough. I’m not sure a random player would qualify as rational anyway.
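A minimal sketch of that loop, assuming a simple frequency threshold (the names, the threshold and the minimum number of games are my own choices for illustration, not taken from the posted program):

```python
# Hedged sketch of the calibration loop described above, not the actual posted
# program: keep asking for practice choices until one option clearly dominates.
def calibrate(ask_choice, threshold=0.99, min_games=100):
    one_box = two_box = 0
    while True:
        choice = ask_choice()          # "1" for one-box, "2" for two-box
        if choice == "1":
            one_box += 1
        else:
            two_box += 1
        total = one_box + two_box
        if total < min_games:
            continue
        if one_box / total >= threshold:
            return "1"                 # predict one-boxing for the real game
        if two_box / total >= threshold:
            return "2"                 # predict two-boxing for the real game
```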
Done that way, Omega can be made as accurate as we wish.
It is still not a perfect predictor: the player could in principle outguess Omega by predicting at which move the desired accuracy will be reached. But it is good enough for me, and the Omega player could add some randomness on its side to defeat such guessers.
I see no reason why the program described above could not count as an acceptable Omega following the rules of Newcomb’s problem.
Not announcing which game is the real one is only there to prevent cheating and to ensure that the actual experiment takes place in the same environment as the calibration.
I wonder whether anyone would seriously choose to two-box at any point under the above rules.
To be fair, this switch (or the rules governing it) would be communicated to some external judge before the actual “real play”, but not to the player (otherwise it would obviously break the calibration accuracy).
But then, the player never knows when they are faced with Omega, the successful predictor, which is an essential part of Newcomb’s problem.
A possible rule could be to start the real game when some fixed accuracy is reached (say, the prediction is right 99% of the time), or after a fixed number of calibration games.
Writing such a predictor is as easy as it gets: just a simple loop that waits until the target accuracy is reached, i.e. until either one-boxing or two-boxing rises above the expected threshold. Obviously, if the player is random, that could take quite a long time (though even against a random player it should happen eventually).
You expect to predict even a random choice with 99% accuracy? Am I misunderstanding something? Rock-scissors-paper programs that try to detect the non-randomness of human choices do succeed against most people, but only a little better than chance, not with 99% accuracy. Against a truly random player they do not succeed at all.
But iterated Newcomb is different from original Newcomb, just as iterated PD is different from plain PD. Now, I don’t see anything wrong with studying related problems, but you yourself said that studying a different but related problem does not touch the original.
I don’t know if you have seen it, but I have posted an actual program that plays Newcomb’s game. As far as I understand what I have done, this is not an iterated Newcomb’s problem but a single-shot one. You should also notice that the calibration phase returns no output to the player (well, I added a display of the accuracy reached, but that is not necessary).
Unless I have overlooked some detail, the predictor’s accuracy is currently tuned to above 90%, but any level of accuracy is reachable.
As I explained yesterday, the key point is to run a “calibration” phase before the actual game. To make the calibration useful I have to blur the boundary between calibration and the actual game, or the player won’t behave in the calibration phase as he would in the real game. Hence the program needs to run a number of “maybe real” games before playing the true one. For the same reason we also cannot tell the user that he is playing the real and final game (or he would know whether he is playing a calibration game or the real one, and the calibration would be useless).
But it is very clear from reading the source code that if the (human) player were some kind of supernatural being, he could defeat the program by choosing both boxes when the prediction is one-box. It would just be a very unlikely event, as unlikely as the chosen accuracy level makes it.
I claim this is a true, unmodified Newcomb’s problem; the whole calibration process is there only to make the precondition of Newcomb’s problem actually true, namely Omega’s prediction accuracy (and verifiably so for the human player: he can read the source code and convince himself, or even run the program and understand why the prediction will be accurate).
As far as I know, Newcomb’s problem does not dictate how the initial precondition of accuracy is established. In programming terms, I am merely composing two functions, the first of which ensures that the entry precondition of good prediction accuracy holds.
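In code, the composition I have in mind looks roughly like this (a sketch under the same assumptions as the earlier one, with the usual $1,000,000 / $1,000 payoffs; it is not the exact posted source):

```python
# Sketch of the two composed functions: the first establishes the "accurate
# predictor" precondition, the second plays the single real game.
def play_newcomb(ask_choice):
    prediction = calibrate(ask_choice)              # calibration phase (see earlier sketch)
    box_b = 1_000_000 if prediction == "1" else 0   # Omega fills box B from its prediction
    choice = ask_choice()                           # the real game; the player is not told it is real
    payoff = box_b if choice == "1" else box_b + 1_000
    return payoff, choice == prediction             # money won, and whether Omega was right
```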
I see a problem with the proposed method. Your program learns how often, on average, its opponent one-boxes or two-boxes. If I (as Omega) learn that someone is a one-boxer, then I can predict that they will one-box next time, put money in box B, and be proved right. But then, in an iterated game, if the one-boxer learns that I am not predicting his decision in the individual case, but have made a general prediction once and for all and thereafter always filling box B, then he can with impunity take both boxes and prove my prediction wrong.
A true Omega needs to make both P(box B full | take one box) and P(box B empty | take both boxes) high. The proposed scheme ensures that P(box B full | habitual one-boxer) and P(box B empty | habitual two-boxer) are high, which is not quite the same.
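A toy illustration of the gap, with made-up numbers: suppose I judge purely by habit and always fill box B for a player who has one-boxed 999 times out of 1000, and the player, knowing this, takes both boxes on the round that matters. Then

```latex
P(\text{box B full} \mid \text{habitual one-boxer}) = 1,
\qquad
P(\text{box B empty} \mid \text{take both boxes}) = 0 .
```

The first conditional is as high as you like while the second is as low as it can get.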
Similarly, suppose I convince Eliezer that I’m Omega. He has publicly avowed one-boxing on Newcomb, so I can skip the learning phase, fill box B, and be proved right. But if, for some reason, he suspects that I’m not a superintelligent superbeing with superpowers of prediction, and in a series of games, experiments with two-boxing, I will be exposed as an impostor.
Iterated Newcomb played between programs given access to each other’s source code would be an interesting challenge. I assume Omega doesn’t care about the money, but plays for the gratification of correctly predicting the other player’s choice. The other player is playing for the money.
A simpler, zero-sum game also suggests itself to me. This is more like Rock-Paper-Scissors than Newcomb, but again the point is to play using knowledge of the other person’s code. Each player chooses 0 or 1. Player A wins if the choices are the same, player B wins if they are different.
(This might look as if A is trying to predict B and B is trying to avoid being predicted, but the game is actually symmetric, both players doing both of these things. Swap the labels on B’s choices and B wins on equality and A on inequality.)
In classical game theory, the optimal strategy is to toss a coin, and the expected payoff is zero. The challenge is to do better against real opponents.
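For concreteness, a sketch of that 0/1 matching game and of the coin-tossing baseline (my own framing, just to fix the rules stated above):

```python
import random

# The 0/1 matching game: A wins if the two choices match, B wins if they differ.
def play_round(strategy_a, strategy_b):
    a, b = strategy_a(), strategy_b()
    return 1 if a == b else -1          # payoff to A; B gets the negative (zero-sum)

# Classical optimal strategy: toss a fair coin. Against it every opponent
# averages zero, which is the baseline the challenge above tries to beat.
coin = lambda: random.randint(0, 1)

if __name__ == "__main__":
    rounds = 10_000
    total = sum(play_round(coin, coin) for _ in range(rounds))
    print(total / rounds)               # hovers around 0
```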
A true Omega needs to make both P(box B full | take one box) and P(box B empty | take both boxes) high. The proposed scheme ensures that P(box B full | habitual one-boxer) and P(box B empty | habitual two-boxer) are high, which is not quite the same.
If I understand correctly the distinction you’re making between “habitual one-boxer” and “take one box”, the first is about the player’s past history and the second about the future. If so, I guess you are right: I am indeed using the past to make my prediction, as using the future is beyond my reach.
But I believe you’re missing the point. My program is not an iterated Newcomb’s problem, because Omega does not make any prediction along the way. It makes only one prediction, for the last game, and the human is not warned. It does not care at all about the player’s reputation, only about his acts in situations where he (the human player) cannot know whether he is really playing or not.
But another point of view is possible, and it is what comes to mind when you run the program: it coerces the player into being either a one-boxer or a two-boxer if he wants to play at all. After any two-boxing, the player will have to spend a very long time one-boxing to get back to the state where he is again seen as a one-boxer. As it is written, the program is likely (to the chosen accuracy level) to make its prediction while the player is still struggling to be a one-boxer.
As a human player, what goes through my mind while running my program is: OK, I want to get the million dollars, therefore I have to become a one-boxer.
If my program keeps running until the desired accuracy is reached, it can reach any accuracy. Truly random sequences are also expected to deviate toward extremes occasionally in the long run (if they did not behave like that, they would not be random). But as such deviations are very rare events, against a random player the target accuracy would almost certainly never be reached within a human lifetime.
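A rough order-of-magnitude check, assuming (purely for illustration, this is not a rule taken from the program) that the predictor demands at least 99 one-box choices in a window of 100 games: a fair-coin player produces such a window with probability

```latex
P \;=\; \frac{\binom{100}{99} + \binom{100}{100}}{2^{100}} \;=\; \frac{101}{2^{100}} \;\approx\; 8 \times 10^{-29},
```

so a random player would essentially never look 99% predictable, while a player who has settled on a policy gets there almost immediately.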
What I claim is that the “calibration phase” described above takes place before Newcomb’s problem. When the actual game starts, the situation described in Newcomb’s problem is exactly what has been reached. The description of the calibration phase could even be provided to the player to convince him that Omega’s prediction will be accurate. At least it is convincing to me: in such a setting I would certainly believe Omega can predict my behaviour. In a way, you could see my calibration phase as Omega waiting for the player to be ready to play truly instead of trying to cheat, as trying to cheat only results in delaying the actual play.
OK, it may be a different problem; what I did is merely replace a perfectly accurate being with an infinitely patient one… but that one is easy to program.
I posted a possible program doing what I describe in another comment. The trick, as expected, is that the easiest way to reach the desired predictability is to change the human player’s understanding of the nature of Omega. In other words: you just remove the human’s free will (and, running my program, the player learns very quickly that doing so is in his best interest), then you play. What is interesting is that the only way to remove his free will that is compatible with the description of Newcomb’s problem is to make him a one-boxer. The incentive to make him a two-boxer would be to exhibit a bad predictor, and that is not compatible with Newcomb’s problem.