That’s a creative attempt to avoid really considering Newcomb’s problem; but as I suggested earlier, the noisy real-world applications are real enough to make this a question worth confronting on its own terms.
Least Convenient Possible World: Omega is type (3), and does not offer the game at all if it calculates that its answers turn out to be contradictions (as in your example above). At any rate, you’re not capable of building or obtaining an accurate Omega’ for your private use.
Aside: If Omega sees probability p that you one-box, it puts the million dollars in with probability p, and in either case writes p on a slip of paper in that box. Omega has been shown to be extremely well-calibrated, and its p only differs substantially from 0 or 1 in the case of the jokers who’ve tried using a random process to outwit it. (I always thought this would be an elegant solution to that problem; and note that the expected value of 1-boxing with probability p should then be 1000000p+1000(1-p).)
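For concreteness, here is a minimal sketch of that arithmetic, assuming Omega fills the box with probability p independently of how your own random draw comes out (the code and function name are mine, just for illustration):

```python
# Minimal sketch (assumption: the opaque box is filled with probability p,
# independently of which way your own randomization actually comes out).
def expected_value(p, million=1_000_000, thousand=1_000):
    filled = p * million                 # expected contents of the opaque box
    ev_if_one_box = filled               # you take only the opaque box
    ev_if_two_box = filled + thousand    # you also take the $1000 box
    return p * ev_if_one_box + (1 - p) * ev_if_two_box

for p in (0.0, 0.25, 0.5, 1.0):
    print(p, expected_value(p), 1_000_000 * p + 1_000 * (1 - p))  # the two columns agree
```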
Yes, these are extra rules of the game. But if these restrictions make rationality impossible, then it doesn’t seem human beings can be rational by your standards (as we’re already being modeled fairly often in social life)—in which case, we’ll take whatever Art is our best hope instead, and call that rationality.

So what do you do in this situation?
Eliezer has repeatedly stated in discussions of NP that Omega only cares about the outcome, not any particular “ritual of cognition”. This is an essential part of the puzzle, because once you start punishing agents for their reasoning you might as well go all the way: reward only irrational agents and say nyah nyah puny rationalists. Your Omega bounds how rational I can be and outright forbids thinking certain thoughts. In other words, the original raison d’être was refining the notion of perfect rationality, whereas your formulation is about approximations to rationality. Well, who defines what is a good approximation and what isn’t? I’m gonna one-box without explanation and call this rationality. Is this bad? By what metric?
Believe it or not, I have considered the most inconvenient worlds repeatedly while writing this, or I would have had just one or two cases instead of four.
A strategy Omega uses to avoid paradox, which incidentally punishes certain rituals of cognition because they lead to paradox, is different from Omega deliberately handicapping your thought process. It is not a winning strategy to pursue a line of thought that produces a paradox instead of a winning decision. I would wait until Omega forbids strategies that would otherwise win before complaining that he “bounds how rational I can be”.
Maybe see it as a competition of wits between two agents whose personal goals may or may not be compatible. If they are not of similar capability, the one with more computational resources, and the better use of those resources, is the one that will get its way, against the other’s will if necessary. If you were “bigger” than Omega, then you’d be the one to win, no matter which weird rules Omega wished to use. But Omega is bigger … by definition.
In that case, the only way for the smaller agent to succeed is to embed its own goals into the other agent’s. In practice agents aren’t omniscient or omnipotent, so even an agent orders of magnitude more powerful than another may still fail against the latter. That becomes increasingly unlikely, but not totally impossible (as in playing lotteries).
If the difference in power is small enough, then both agents ought to cooperate and compromise, since in most cases that’s how each of them can maximize its gains.
But in the end, once again, rationality is about reliably winning in as many cases as possible. In some cases, however unlikely and unnatural they may seem, that just can’t be achieved. That’s what optimization processes, and their power, are about: they steer the universe into very unlikely states, including states where “rationality” is counterproductive.
Yes! Where is the money? A battle of wits has begun! It ends when a box is opened.
Of course, it’s so simple. All I have to do is divine from what I know of Omega: is it the sort of agent who would put the money in one box, or both? Now, a clever agent would put little money into only one box, because it would know that only a great fool would not reach for both. I am not a great fool, so I can clearly not take only one box. But Omega must have known I was not a great fool, and would have counted on it, so I can clearly not choose both boxes.
Truly, Omega must admit that I have a dizzying intellect.
On the other hand, perhaps I have confused this with something else.
My version of Omega still only cares about its prediction of your decision; it just so happens that it doesn’t offer the game if it predicts “you will 2-box if and only if I predict you will 1-box”, and it plays probabilistically when it predicts you decide probabilistically. It doesn’t reward you for your decision algorithm, only for its outcome— even in the above cases.
Yes, I agree this is about approximations to rationality, just like Bayescraft is about approximating the ideal of Bayesian updating (impossible for us to achieve since computation is costly, among other things). I tend to think such approximations should be robust even as our limitations diminish, but that’s not something I’m confident in.
Well, who defines what is a good approximation and what isn’t?
A cluster in conceptspace. Better approximations should have more, not less, accurate maps of the territory and should steer higher proportions of the future into more desirable regions (with respect to our preferences).
I’m gonna one-box without explanation and call this rationality. Is this bad? By what metric?
I think “without explanation” is bad in that it fails to generalize to similar situations, which I think is the whole point. In dealing with agents who model your own decisions in advance, it’s good to have a general theory of action that systematically wins against other theories.
Your fix is a kludge. I could randomize: use the detector to determine Omega’s p and then use 1-p, or something like that. Give me a general description of what your Omega does, and I’ll give you a contradiction in the spirit of my original post. Patch the holes all you want. Predicting the future always involves a contradiction; it’s just more or less hard to tease out. You can’t predict the future and outlaw contradictions by fiat; it is logically impossible. This was one of the points of my post.
Your fix is a bit of a kludge. I could randomize: use my detector to determine p, and then use 1-p. So for total consistency you should amend Omega to “protect” the value of p, and ban the agent if p is tampered with. Now it sounds bulletproof, right?
But here’s the rub: the agent doesn’t need a perfect replica of Omega. A half-assed one will do fine. In fact, if a certain method of introspection into your initial state allowed Omega to determine the value of p, then any weak attempt at introspection will give you some small but non-zero information about what p Omega detected. So every living person will fail your Omega’s test. My idea with the scanner was just a way to “externalize” the introspection, making the contradiction stark and evident.

Any other ideas on how Omega should behave?
I could randomize: use my detector to determine p, and then use 1-p.
In this case, Omega figures out you would use that detector and predicts you will use 1-p. If your detector is effective, it will take into account that Omega knows about it, and will figure that Omega predicted 1-(1-p) = p. But Omega would have realized that the detector could do that. This is the beginning of an infinite recursion attempting to resolve a paradox, no different just because we are using probabilities instead of Booleans. Omega recognizes this and concludes the game is not worth playing. If you and your detector are rational, you should reach the same conclusion and find a different strategy. (Well, Omega could predict a probability of .5, which is stable, but a strategy to take advantage of this would lead to paradox.)
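To spell out the parenthetical: if the detector always answers a predicted p with 1-p, the only self-consistent prediction is the fixed point p = 0.5; any other starting guess just alternates forever. A toy sketch (the detector model here is an assumption, purely for illustration):

```python
def detector_response(predicted_p):
    # Hypothetical detector strategy: one-box with probability 1 - (Omega's predicted p).
    return 1.0 - predicted_p

def find_consistent_prediction(p0, rounds=1000, tol=1e-9):
    # Iterate the prediction/response loop; a consistent prediction is a fixed point.
    p = p0
    for _ in range(rounds):
        q = detector_response(p)
        if abs(q - p) < tol:
            return p  # stable: the prediction matches the behaviour it predicts
        p = q
    return None  # never settles: the "paradox" case Omega refuses to play

print(find_consistent_prediction(0.5))  # 0.5
print(find_consistent_prediction(0.3))  # None: alternates between 0.3 and 0.7
```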
Omegas of type 3 don’t use simulations. If Omega is a simulator, see case 2.
...why is everybody latching on to 3? A brainwave-reading Omega is a pathetic joke that took no effort to kill. Any realistic Omega would have to be type 2 anyway.
Paradoxes show that your model is bad. My post was about defining non-contradictory models of Newcomb’s problem and seeing what we can do with them.
Could you taboo “simulation” and explain what you are prohibiting Omega from doing by specifying that Omega does not use simulations? Presumably this still allows Omega to make predictions.
That one’s simple: prohibit indexical uncertainty. I must be able to assume that I am in the real world, not inside Omega. So should my scanner’s internal computation—if I anticipate it will be run inside Omega, I will change it accordingly.
Edit: sorry, now I see why exactly you asked. No, I have no proof that my list of Omega types is exhaustive. There could be a middle ground between types 2 and 3: an Omega that doesn’t simulate you, but still somehow prohibits you from using another Omega to cheat. But, as orthonormal’s examples show, such a machine doesn’t readily spring to mind.
Indexical uncertainty is a property of you, not Omega.
Saying Omega cannot create a situation in which you have indexical uncertainty is too vague. What process of cognition is prohibited to Omega that prevents producing indexical uncertainty, but still allows for making calibrated, discriminating predictions?
You’re digging deep. I already admitted that my list of Omegas isn’t proven to be exhaustive and probably never can be, given how crazy the individual cases sound. The thing I call a type 3 Omega would better be called a Terminating Omega: a device that outputs one bit in bounded time given any input situation. If Omega is non-terminating—e.g. it throws me out of the game on predicting certain behavior, or hangs forever on some inputs—then of course such an Omega doesn’t necessarily have to be a simulation. But then you need a halfway credible account of what it does, because otherwise the problem is underspecified and incomplete.
The process you’ve described (Omega realizes this, then realizes that...) sounded like a simulation—that’s why I referred you to case 2. Of course you might have meant something I hadn’t anticipated.
Part of my motivation for digging deep on this issue is that, although I did not intend for my description of Omega and the detector reasoning about each other to be based on a simulation, I could see after you brought it up that it might be interpreted that way. I thought if I knew on a more detailed level what we mean by “simulation”, I would be able to tell if I had implicitly assumed that Omega was using one. However, any strategy I come up with for making predictions seems like something I could consider a simulation, though it might lack detail and, by omitting important details, be inaccurate. Even just guessing could be considered a very undetailed, very inaccurate simulation.
I would like a definition of simulation that doesn’t lead to this conclusion, but in case there isn’t one, suppose the restriction against simulation really means that Omega does not use a perfect simulation, and you have a chance to resolve the indexical uncertainty.
I can imagine situations in which an incomplete, though still highly accurate, simulation provides information to the simulated subject to resolve the indexical uncertainty, but this information is difficult or even impossible to interpret.
For example, suppose Omega does use a perfect simulation, except that he flips a coin. In the real world, Omega shows you the true result of the coin toss, but he simulates your response as if he had shown you the opposite result. Now you still don’t know if you are in a simulation or reality, but you are no longer guaranteed by determinism to make the same decision in each case. You could one box if you see heads and two box if you see tails. If you did this, you have a 50% probability that the true flip was heads, so you gain nothing, and a 50% probability that the true flip was tails and you gain $1,001,000, for an expected gain of $500,500. This is not as good as if you just one box either way and gain $1,000,000. If Omega instead flips a biased coin that shows tails 60% of the time, and tells you this, then the same strategy has an expected gain of $600,600, still not as good as complete one-boxing. But if the coin were biased to show tails 1000 times out of 1001, then the strategy would match one-boxing in expectation, and it would do better for a more extreme bias.
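The arithmetic in that example can be reproduced directly (a small sketch; the payoffs are the standard $1,000,000 and $1,000, and the strategy is the one just described: one-box on heads, two-box on tails):

```python
def expected_gain(tails_prob, million=1_000_000, thousand=1_000):
    # Strategy: one-box if shown heads, two-box if shown tails.
    # Omega simulates you being shown the OPPOSITE of the true flip.
    heads_prob = 1.0 - tails_prob
    # True heads: you one-box, but the simulated you two-boxed, so the box is empty.
    gain_if_heads = 0
    # True tails: you two-box, and the simulated you one-boxed, so the box is full.
    gain_if_tails = million + thousand
    return heads_prob * gain_if_heads + tails_prob * gain_if_tails

print(expected_gain(0.5))         # 500500.0
print(expected_gain(0.6))         # 600600.0
print(expected_gain(1000 / 1001)) # ~1000000.0, matching plain one-boxing
```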
So, if you suppose that Omega uses an imperfect simulation (without the coin), you can gather evidence about whether you are in reality or the simulation. You would need a probability greater than 1000/1001 that you are in reality before two-boxing becomes a good strategy. I would be impressed with a strategy that could accomplish that.
As for terminating, if Omega detects a paradox, Omega puts money in box 1 with 50% probability. It is not a winning strategy to force this outcome.
It seems your probabilistic simulator Omega is amenable to rational analysis just like my case 2. In good implementations we can’t cheat, in bad ones we can; it all sounds quite normal and reassuring, no trace of a paradox. Just what I aimed for.
As for terminating, we need to demystify what “detecting a paradox” means. Does Omega somehow compute the actual probabilities of me choosing one or two boxes? Then what part of the world is assumed to be “random” and what part is evaluated exactly? An answer to this question might clear things up.
One way Omega might prevent paradox is by adding an arbitrary time limit, say one hour, for you to choose whether to one box or two box. Omega could then run the simulation, however accurate, up to the limit of simulated time or until you actually make a decision, whichever comes first. Exceeding the time limit could be treated as identical to two-boxing. A more sophisticated Omega, able to locate in constant time the point in the simulation at which you make your decision (perhaps because the simulation state is described by a closed-form function with nice algebraic properties), could simply require that you eventually make a decision. This essentially puts the burden on the subject not to create a paradox, or anything that might be mistaken for a paradox, or take too long deciding.
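A rough sketch of that mechanism, with everything (names included) hypothetical; it only illustrates “no decision within the limit counts as two-boxing”:

```python
TIME_LIMIT = 3600  # one simulated hour, in whatever step units the simulation uses

def predict_decision(simulate_step, initial_state):
    """Advance a (hypothetical) step-by-step simulation until the subject decides
    or the time limit runs out; timing out is treated as two-boxing."""
    state, t = initial_state, 0
    while t < TIME_LIMIT:
        state, decision = simulate_step(state)  # decision: "one-box", "two-box", or None
        if decision is not None:
            return decision
        t += 1
    return "two-box"  # exceeded the limit: counts as two-boxing

# Toy usage: a subject who dithers for ten steps and then one-boxes.
print(predict_decision(lambda s: (s + 1, "one-box" if s >= 10 else None), 0))
```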
Then what part of the world is assumed to be “random” and what part is evaluated exactly?
Well, Omega could give you a pseudo-random number generator and agree to treat it as a probabilistic black box when making predictions. It might make sense to treat quantum decoherence as assigning probabilities to the different macroscopic outcomes you might observe, unless something like world mangling is true and Omega can predict deterministically which worlds get mangled. Less accurate Omegas could use probability to account for their own inaccuracy.
In good implementations we can’t cheat, in bad ones we can
Even better, in principle (though it would be computationally difficult), describe different simulations with different complexities and associated Occam priors, and with different probabilities of Omega making correct predictions. From this we could determine how much of a track record Omega needs before we consider one-boxing a good strategy. Though I suspect actually doing this would be harder than making Omega’s predictions.
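A very crude sketch of what that calculation might look like, collapsing “different simulations with different complexities” down to just two hypotheses: a reliable predictor (with an assumed 99% accuracy) versus a mere guesser, under an arbitrary Occam-flavored prior. All the numbers are placeholders, and the expected values condition the box contents on your choice, which is itself a contested modelling decision:

```python
# Crude two-hypothesis stand-in for the full calculation described above.
# H1: Omega predicts correctly with probability ACCURACY; H0: Omega merely guesses.
ACCURACY = 0.99        # assumed reliability of a "real" Omega
PRIOR_H1 = 1e-6        # arbitrary Occam-flavored prior that such a predictor exists
MILLION, THOUSAND = 1_000_000, 1_000

def posterior_h1(n):
    """Posterior for H1 after observing n consecutive correct predictions."""
    like_h1, like_h0 = ACCURACY ** n, 0.5 ** n
    return PRIOR_H1 * like_h1 / (PRIOR_H1 * like_h1 + (1 - PRIOR_H1) * like_h0)

def ev_one_box(p_h1):
    p_filled = p_h1 * ACCURACY + (1 - p_h1) * 0.5
    return p_filled * MILLION

def ev_two_box(p_h1):
    p_filled = p_h1 * (1 - ACCURACY) + (1 - p_h1) * 0.5
    return p_filled * MILLION + THOUSAND

n = 0
while ev_one_box(posterior_h1(n)) <= ev_two_box(posterior_h1(n)):
    n += 1
print(n)  # length of track record after which one-boxing looks better in this toy model
```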