The Ultimate Newcomb’s Problem
You see two boxes and you can either take both boxes, or take only box B. Box A is transparent and contains $1000. Box B contains a visible number, say 1033. The Bank of Omega, which operates by very clear and transparent mechanisms, will pay you $1M if this number is prime, and $0 if it is composite. Omega is known to select prime numbers for Box B whenever Omega predicts that you will take only Box B; and conversely select composite numbers if Omega predicts that you will take both boxes. Omega has previously predicted correctly in 99.9% of cases.
Separately, the Numerical Lottery has randomly selected 1033 and is displaying this number on a screen nearby. The Lottery Bank, likewise operating by a clear known mechanism, will pay you $2 million if it has selected a composite number, and otherwise pay you $0. (This event will take place regardless of whether you take only B or both boxes, and both the Bank of Omega and the Lottery Bank will carry out their payment processes—you don’t have to choose one game or the other.)
You previously played the game with Omega and the Numerical Lottery a few thousand times before you ran across this case where Omega’s number and the Lottery number were the same, so this event is not suspicious.
Omega also knew the Lottery number before you saw it, and while making its prediction, and Omega likewise predicts correctly in 99.9% of the cases where the Lottery number happens to match Omega’s number. (Omega’s number is chosen independently of the lottery number, however.)
You have two minutes to make a decision, you don’t have a calculator, and if you try to factor the number you will be run over by the trolley from the Ultimate Trolley Problem.
Do you take only box B, or both boxes?
As wedrifid said, this is approximately transparent Newcomb plus distractions. Given Eliezer’s clarifications (Omega knows the lottery number and is accurate even when Omega’s and the lottery’s numbers match), I’ll ask how two algorithms would perform against Omega: OneBoxBot, which always one-boxes, and ConditionalBot, which one-boxes unless the lottery and Omega numbers match, in which case it two-boxes. I’ll ignore the tiny error rates in computing payoffs.
Case 1: the lottery is going to output a prime number.
Against OneBoxBot, Omega delivers a prime number, which may or may not match the lottery number. OneBoxBot gets a payoff of $1MM.
Against ConditionalBot, Omega must deliver a prime number different from the lottery number (if it matched the lottery Omega would be handing over a prime number with a predicted response of two-boxing from Conditional). So ConditionalBot one-boxes and gets a payoff of $1MM.
Case 2: the lottery will output a composite number.
OneBoxBot will one-box, so Omega must provide it with a prime number (which will not match the lottery). So OneBoxBot gets a payoff of $3MM every time the lottery randomly outputs a composite number.
Against ConditionalBot, Omega has two choices. It could deliver a prime number which does not match the lottery. This would lead ConditionalBot to one-box and get a total payoff of $3MM, the same as OneBoxBot. Alternatively, Omega could match the composite lottery number, leading ConditionalBot to two-box and get a payoff of $2,001,000, $999,000 worse than OneBoxBot’s payoff.
So: both algorithms receive lottery wins with equal frequency, OneBoxBot always performs at least as well as ConditionalBot across the possibilities, and ConditionalBot performs worse when the lottery outputs a composite and Omega chooses to match it. Thus we should not adopt ConditionalBot over OneBoxBot and should one-box faced with the Ultimate Newcomb problem.
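The case analysis above can be checked mechanically. A minimal sketch in Python (the `payoff` helper and the dollar constants are mine; as above, Omega’s 0.1% error rate is ignored):

```python
OMEGA_PRIZE = 1_000_000    # box B pays this if Omega's number is prime
BOX_A = 1_000              # transparent box A
LOTTERY_PRIZE = 2_000_000  # paid iff the lottery number is composite

def payoff(one_box: bool, omega_prime: bool, lottery_composite: bool) -> int:
    """Total payout of one game: the lottery, box B, and (if two-boxing) box A."""
    total = LOTTERY_PRIZE if lottery_composite else 0
    total += OMEGA_PRIZE if omega_prime else 0  # box B's contents
    if not one_box:
        total += BOX_A                          # box A as well
    return total

# Case 1 (lottery prime): both bots one-box on a prime from Omega.
assert payoff(True, True, False) == 1_000_000
# Case 2 (lottery composite), OneBoxBot or a non-matching ConditionalBot:
assert payoff(True, True, True) == 3_000_000
# Case 2, Omega matches the composite lottery number against ConditionalBot:
assert payoff(False, False, True) == 2_001_000
```

The $999,000 gap between the last two cases is exactly the difference quoted above.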
There is a big and implicit step that is worth explicating here, because most people who first approach Newcomb-like problems miss it completely:
TREAT HUMANS AS BOTS
By a bot I mean an algorithm, of course: an algorithm Omega can analyze for all possible combinations of inputs.
That this step is valid follows from the problem’s stipulation that Omega can predict your actions. In other words, it knows your output for any combination of inputs it cares to give you.
Whether Omega does this by running your algorithm in a sandbox or by analyzing your code does not affect the answer to the puzzle, since the end result is the same. But the sandboxing version can often make it easier to find the solution, because it lets one rely on a reflective equilibrium of sorts: you cannot tell, while deciding what to do, whether you are in Omega’s simulation of you or not, so you may as well assume that you are.
TL;DR: to Omega, you are a bot, so write down all relevant algorithms and analyze/run them before picking a winning one.
This is what I did too. One big advantage is that it changes Omega’s predictive abilities from mysterious magic to a simple process that it’s possible to completely analyse without getting confused.
I’m afraid that, as written, I cannot answer the problem’s final question, as by the time it was asked...
… I’d been hit by the trolley.
Has anyone actually tried to answer the Ultimate Trolley Problem? I’m thinking right.
Yeah, same. But I figured it wouldn’t violate the spirit of the exercise to pretend to turn the clock back two minutes and only set it going again once I understood the whole post!
(I then spent about 90 seconds mentally flailing about and tying myself in knots, before noticing I only had a few seconds left. At which point I told myself to find the simplest, dumbest solution that might possibly work; better to give a suboptimal answer than to get splattered by a trolley. And the simplest, dumbest solution is that the Lottery number & payout are wholly independent of what Omega or I do, so I can completely ignore it to reduce the problem to the usual Newcomb’s problem. Hence take only box B.)
I trust that observing that it’s not a multiple of 2, 3, 5, or 11 doesn’t count.
Depending on what does or doesn’t count, it might be possible to be sure of whether 1033 is prime without “trying to factor” it (i.e., by checking whether it’s divisible (without actually carrying out the divisions) by any prime number less than sqrt(1033)).
So I interpreted “try to factor it” pessimistically as “try to figure out for sure whether it’s prime”.
Right. And, indeed, when I need to check in my head whether a smallish number like 1033 is prime, most of what I do isn’t factoring. E.g.: 1033+17 = 1050 = 50×21, ruling out {2,3,5,7,17} in a single go but not identifying any factors or (even approximate) quotients. 1033−13 = 1020 = 20×51, ruling out {2,3,5,13,17} instead. Etc.
I also assumed that Eliezer’s intention was that you shouldn’t be doing this sort of thing.
[EDITED to fix formatting screwage—asterisks treated as markup rather than multiplication—my apologies for any mystification this caused.]
I’m curious, what’s the method you’re using there?
(Initial remark: damn, I see that what I wrote got screwed up by asterisks getting treated as markup characters; perhaps everything would have been clear without that. Will fix once I’ve finished writing this.)
I’m not sure it really warrants dignifying with the term “method”, but:
The relations I’m looking for are of the form n=a+b or n=a-b where a,b (1) are easy to factor because they’re products of small primes, and (2) have no common factor. In that case, you know that n isn’t a multiple of any of those prime numbers—so, e.g., if a, but not b, is a multiple of p, then a+b and a-b are not multiples of p.
The easiest case is where b is itself a small prime number like 17 or 13. Why 17 and 13? Because the number ends in 3, so subtracting something ending in 3 or adding something ending in 7 will give us (an easy-to-deal-with factor of 10, and) something nicely smaller. In some cases it’s easier to remove digits from the start of the number rather than the end; for a trivial example, 1033 isn’t a multiple of 103 because it’s 1030 + 3.
Let’s do 1033 a bit more thoroughly. It’s between 32^2=1024 and 33^2=1089, so we need to check primes up to 31. 1033=1050−17 rules out 2,3,5,7,17. 1033=1020+13 rules out 2,3,5,13,17. 1033=1000+33 rules out 2,3,5,11. (That’s all the primes up to 17 done, leaving 19,23,29,31.) 1033=1010+23 rules out 2,5,(101),23. At this point, actually, I’d try dividing by 19,29,31, but let’s carry on. For 31 we’ll subtract 93: 1033=940+93, and 940=2×2×5×47, so we’ve ruled out 31. For 19 we’ll subtract 950, leaving 83, which is prime, so 19 is ruled out. For 29 I suppose the best we can do is to subtract 870, leaving 163, which I happen to know is prime because of exp(pi sqrt(163)) but which in any case clearly isn’t a multiple of 29. So we’re done and 1033 is prime. And I didn’t factorize 1033, unless proving something is prime by a roundabout route counts as factorizing it. If one of these things had turned out to have the same prime number dividing both summands, there’d still have been extra work to do to get the quotient.
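For anyone safely outside the two-minute window, a straight trial-division check (in Python; this is exactly the computation the trolley forbids) confirms the hand method’s conclusion:

```python
# Trial division up to sqrt(n). For 1033, sqrt(1033) ~ 32.1, so this
# tests exactly the primes up to 31 discussed above (plus some composites).
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

assert is_prime(1033)       # 1033 is indeed prime
assert not is_prime(1037)   # 1037 = 17 * 61, a nearby hard-to-spot composite
```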
Primes less than sqrt(1033) for which I know of no really obvious tricks (i.e. the digit-adding tricks for them aren’t so simple that one can trivially do them in one’s head): 7, 13, 17, 19, 23, 29, 31
Also, since the scenario said we’ve done this many times and we haven’t been trolleyed yet, it can’t be all that easy to get trolleyed.
(eta: why the −1? Both points seem solid to me, even in light of the additional trick below: there are several possible factors remaining, and it’s not as if I was enumerating them during my 2 minutes. Moreover, the 1001 trick works for 1033 but doesn’t help so much with other typical numbers of that general magnitude, say 1537, so it’s not something you’re liable to do by accident.)
Maybe you get run over by the trolley only if you try to factor Omega’s number, and all the times before this Omega’s number and the lottery number were different, and there’s not much point in trying to factor the lottery number in that case, since (assuming linear utility of money) it has no bearing on how many boxes you should take.
1001 = 7 x 11 x 13, so each of those divides 1033 iff it divides 32 (and divides 314,159,265 iff it divides 265 − 159 + 314).
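That works because 1000 ≡ −1 (mod 7), (mod 11), and (mod 13), so divisibility by any of them is preserved by the alternating sum of three-digit groups. A quick sketch (the helper name is mine):

```python
# Alternating sum of 3-digit groups, taken from the right:
# e.g. 314,159,265 -> 265 - 159 + 314. Since 1000 = -1 mod 7, 11, 13,
# a number is divisible by 7, 11, or 13 iff this sum is.
def alt_sum_3(n: int) -> int:
    s, sign = 0, 1
    while n:
        s += sign * (n % 1000)
        sign = -sign
        n //= 1000
    return s

assert alt_sum_3(314_159_265) == 265 - 159 + 314  # = 420
assert alt_sum_3(1033) == 33 - 1                  # = 32, as in the comment above
for p in (7, 11, 13):
    assert (314_159_265 % p == 0) == (alt_sum_3(314_159_265) % p == 0)
    assert (1033 % p == 0) == (alt_sum_3(1033) % p == 0)
```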
I’m not sure that both these statements can be true at the same time.
What does Omega pick if my algorithm is, “if the number ends in 3, two-box, else one-box”? Seems it will be right if it either picks a composite ending in 3 (where I don’t get the million) or a prime not ending in 3 (where I do), so how does it decide?
Maybe I’m confused, but can’t it just… pick one however it wants? As long as the final tally comes to “correct in 99.9% of cases” the problem is satisfied, and it doesn’t matter how it decided, does it? Why does it matter if there are multiple correct choices?
Edit: Nevermind, I see what you mean. If your algorithm depends on the Lotto, and Omega depends on your algorithm, then Omega depends on the Lotto. You can actually influence how often this “same number” scenario happens via your algorithm.
Excellent point.
A plausible setup:
There are two equal-sized groups of large numbers:
Prime numbers
Composite numbers without any easy factorization
… such that you can’t be expected to distinguish them in the allowed time.
The lottery works by picking one of the numbers at random.
Omega’s algorithm: for each number in those groups, predict whether you would two-box (given the already-determined lottery number). In the set of prime numbers for which you one-box, and composite numbers for which you two-box, pick a number at random and show it to you (so if it predicts you always two-box, it’s sure to pick a composite number, etc.).
In that setup, if your algorithm is “if the number ends in 3, two-box, else one-box”, then Omega will just pick a random number among the composites ending in 3 and the primes ending in 1, 3, 7, or 9.
And if your algorithm is “If the number is the same as the lottery, two-box, else, one-box”, then yeah the chosen number will not be completely independent of the lottery number (Omega can only pick the lottery number if it’s composite, or if it makes a prediction error), but that looks independent enough for the spirit of the post.
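The setup above can be sketched directly. Everything here is my own guess at an implementation: the two number pools, the `omega_pick` name, and the convention that the player returns "one-box" or "two-box" given both numbers.

```python
import random

# Two pools of numbers a human can't classify in two minutes.
PRIMES = [1009, 1013, 1019, 1021, 1031, 1033, 1039, 1049]
COMPOSITES = [1003, 1007, 1037, 1043, 1073, 1079, 1081, 1121]  # e.g. 1037 = 17*61

def omega_pick(player, lottery_number):
    """For each candidate number, predict the player's choice given the
    already-drawn lottery number; keep the candidates on which Omega's
    prediction comes out correct, then pick one of those at random."""
    consistent = [n for n in PRIMES if player(n, lottery_number) == "one-box"]
    consistent += [n for n in COMPOSITES if player(n, lottery_number) == "two-box"]
    return random.choice(consistent)

# The "two-box iff the number ends in 3" player from the comment above:
ends_in_3 = lambda n, lot: "two-box" if n % 10 == 3 else "one-box"

pick = omega_pick(ends_in_3, random.choice(PRIMES + COMPOSITES))
# Omega is always right against this player: it shows either a composite
# ending in 3 or a prime not ending in 3.
assert (pick % 10 == 3) == (pick in COMPOSITES)
```

Against the "two-box iff it matches the lottery" player, `omega_pick` can only return the lottery number when that number is composite, which is the mild dependence noted above.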
If you take the second statement to mean, “There exists an algorithm for Omega satisfying the probabilities for correctness in all cases, and which sometimes outputs the same number as NL, which does not take NL’s number as an input, for any algorithm Player taking NL’s and Omega’s numbers as input,” then this …seems… true.
I haven’t yet seen a comment that proves it, however. In your example, let’s assume that we have some algorithm for NL with some specified probability of outputting a prime number, and some specified probability it will end in 3, and maybe some distribution over magnitude. Then Omega need only have an algorithm that outputs combinations of primeness and 3-endedness such that the probabilities of outcomes are satisfied, and which sometimes produces coincidences.
For some algorithms of NL, this is clearly impossible (e.g. NL always outputs a prime; cf. a Player who always two-boxes). What seems less certain is whether there exists an NL for which Omega can always generate an algorithm (satisfying both 99.9% probabilities) for any algorithm of the Player.
This is to say, what we might have in the statement of the problem is evidence for what sort of algorithm the Numerical Lottery runs.
Perhaps what Eliezer means is that the primeness of Omega’s number may be influenced by the primeness of NL’s number, but not by which number specifically? Maybe the second statement is meant to suggest something about the likelihood of there being a coincidence?
CDT:
Two box, obviously.
EDT:
Assuming this is your last game, two box. Two boxing is evidence that 1033 is composite, so you’ll get more money.
If you will continue playing for a long time, one box. This is evidence that you will go with the “always one box” strategy, which will result in more money. More generally, it is evidence that you will go with a TDT-style strategy more often in the future, and get higher payouts as a result.
TDT:
One box. The always one box strategy has the highest payout.
I’m not sure if I have the right terminology with TDT, but these are the three obvious moves and the reasoning for them.
Ahh, good point. This explains the (likely) motivation for Eliezer to contrive this scenario. It’s a case where one boxing is the right choice but even EDT gets it wrong. Usually at least one of CDT or EDT gets it right.
One-box? I would have said two-box, under the bizarre theory that I can thereby cause the number to be composite.
Checks… hmm. Well, that was unlikely.
Although the title was originally selected in jest, I think this may actually be the Ultimate Newcomb’s Problem because it tempts the largest number of people—EDTers, CDTers, and apparently a substantial portion of LWers—to two-box.
I was surprised that so many people wanted to two-box. Maybe they thought you were only a level 1 agent? I mean, I admit that was part of my deliberation, but the math backs me up, so....
Facebook discussion.
This seems to be just transparent Newcomb’s problem combined with some redundant words. Unless I’ve missed something in the fine print this is a simple “one box” situation.
All decision theory problems are simple ones combined with some confusion that a more enlightened mind would see as redundant and isomorphic to the simple.
Then again, a more enlightened mind would know whether 1033 is prime.
I came to appreciate this particular example when it was observed that EDT and CDT both two-box (despite this being a bad decision). Usually one or the other gets it right, and people at times take their pick from them on a problem-by-problem basis (some even try to formalise this ‘take your pick’ algorithm). This scenario is in the ballpark of the simplest problem that really sets the decision theories apart.
I don’t understand why two-boxing is a bad decision when 99.9% of two-boxers will have 2 million dollars and 99.9% of the one-boxers will have only 1 million.
It is a matter of determining what is controlled by the decision procedure of the player or Omega (who is influenced by the player) and what is controlled by arithmetic and the whims of the person choosing which hypothetical problem to talk about. In this case winning or losing the lottery is pure luck while winning or losing with Omega’s game is determined by the player’s decision. While two-boxing can change the evidence of whether you won the lottery it never influences the lottery outcome one way or the other. On the other hand Omega’s choice is directly determined by the player’s decision procedure.
It may be helpful to consider another hypothetical game which has similar difficulties and which I had previously been using for my purposes in the role of ‘Ultimate Newcomb’s Problem’. Consider:
Take Newcomb’s Problem
Add ‘Transparent Boxes’ modification. (This step is optional—the prohibition on factoring makes the process somewhat opaque.)
Include random component. According to some source considered random, Omega fills the big box as usual 99.99% of the time, but the remaining 0.01% of the time he inverts his procedure.
Posit the problem where the player finds himself staring at an empty big box and a small box with $1,000. Ask him whether or not he takes the small box.
Many people advocate two-boxing in such a situation. I one-box. I would expect people who two-box in Randomized Transparent Newcomb’s to also two-box here for consistency. I would consider it odd for an RTN two-boxer to one-box on Ultimate Newcomb’s. In the above scenario I lose. I get $1,000. But I lose $1,000 1 time out of 10,000 and win $1,000,000 the other 9,999 times. Someone who two-boxes potentially wins $1,000 up to 9,999 times out of 10,000 (finding the equilibrium result there gets weird and depends on details in the specification).
Now I may talk of winning 9,999 times out of 10,000 but that can sound hollow when the fact remains that I lose in the only example brought up. Why do I consider it ok to not win in this case? What makes me think it is ok to optimise for different situations to the one that actually happens? Depending on the point of view this is because I don’t attempt to control random or I don’t attempt to control narrative causality. Omega’s behavior I can influence. I cannot influence assumed random sources and if I follow the improbability to the author’s choice of hypothetical then I choose not to optimize for scenarios based on how likely they are to be discussed. It would be a different question if evidence suggested I was living in a physics based on narrative causality where one-in-a-million chances occur nine times out of ten.
So in short, if the lottery doesn’t give me $2M that is because I am unlucky but if Omega doesn’t give me $1M it is because I am a dumbass. The difference between the two is subtle but critically important.
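Those frequencies pin down the expected values. A sketch with the 0.01% inversion rate above, assuming the simple equilibrium where Omega expects the committed two-boxer’s box to be empty:

```python
ERR = 0.0001  # Omega inverts its usual procedure this often

# Committed one-boxer: the box is full except on inverted runs, where
# one-boxing in front of a visibly empty box nets $0 (the posited case).
ev_one_box = (1 - ERR) * 1_000_000 + ERR * 0
# Committed two-boxer: the box is empty except on inverted runs,
# where it holds the million as well as the visible $1,000.
ev_two_box = (1 - ERR) * 1_000 + ERR * 1_001_000

assert round(ev_one_box) == 999_900
assert round(ev_two_box) == 1_100
```

So the one-boxer’s "loss" in the posited scenario buys roughly a 900-fold advantage in expectation.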
Sorry for the delay to respond, I’ve been busy last couple weeks.
I think you’ve convinced me regarding this. To discuss my own perspective on this, in the past it took me quite a while before I ‘got’ why one-boxing is the right decision in Transparent Newcomb—it was only once I started thinking of decisions as instances of a decision theory/decision procedure that I realized how a “losing” decision may actually be part of what’s a “winning” decision theory overall—and that therefore one-boxing is the correct strategy in Transparent Newcomb.
I guess that in Ultimate Newcomb, one-boxing remains a winning decision theory, though again the winning decision theory is represented in a seemingly ‘losing’ decision. That I failed to get the correct answer here means that though I had understood, I had not really grokked the logic behind this—I behaved too much as if EDT was correct instead.
Thanks for guiding me through this. Much appreciated!
Not always. Some of them are just ill-defined or impossible, combined with some confusion to stop people noticing. (eg, a version of Transparent Newcomb that doesn’t specify what Omega does if a player decides to oppose Omega’s prediction)
It’s transparent Newcomb’s problem but of the “perverse” variety where even when you see that the million is missing you’re supposed to not take the thousand. The typical (e.g. in Good and Real) version just requires you to one-box when the million is there.
I do prefer the “perverse” variant. The typical version seems so trivial. I’ve already got defined-by-the-problem certainty about the payoffs. A few photons in my eyes adds little. To give the ‘perverse’ variant more emphasis I also like to add a small amount of random noise to Omega’s choice so that the “one box when box empty” scenario isn’t self-preventing.
Cf. a bunch of comments in this thread apparently thinking you can control arithmetic...
Reading this was like hearing Vincent Price saying “your payoff matrices are useless here! Ahahahaha!” That was a legitimate source of epistemic dread.
It took me about a minute of turning it over to fully grok the structure of the problem, at which point I settled on two-boxing, which va guvf pnfr, jvgu guvf ahzore, yrnirf zr ubyqvat n zrer gubhfnaq qbyynef, orpnhfr guvf vf bar bs gur 0.01% bs havirefrf jurer Bzrtn jnf jebat.
I am pretty solidly sold on two-boxing at the moment, under the absurd-sounding premise that I can influence whether or not the number is prime with my decision. I really hope time travel isn’t possible.
Yes, I saw attempting to ambiently control basic arithmetic as the obvious solution here, too.
The optimal choice if you’re just told you’re gonna play this game a bunch of times is one-boxing. (Edit: this is because trying to control the lottery number will just result in fewer games where the numbers are the same.) However, if you’re guaranteed a run where the numbers come up the same, then there’s some hidden control, via the thought experiment itself. If I two-box, Omega picked a composite, so in order for the scenario to have occurred at all (which by the thought experiment it’s guaranteed to have) the lottery must have come up with a composite. Calling something random doesn’t make it random. I probably wouldn’t have noticed if I hadn’t recently read GAZP vs GLUT.
If Omega maintains a 99.9% accuracy rate against a strategy that changes its decision based on the lottery numbers, it means that Omega can predict the lottery numbers. Therefore, if the lottery number is composite, Omega has multiple choices against an agent that one-boxes when the numbers are different and two-boxes when the numbers are the same: it can pick the same composite number as the lottery, in which case the agent will two-box and earn $2,001,000, or it can pick a different, prime number, and have the agent one-box and earn $3,000,000. It seems like the agent that one-boxes all the time does better by eliminating the cases where Omega selects the same number as the lottery, so I would one-box.
I initially thought two-box, but on thinking about it more, I’m going for one-box.
For simple numbers, let’s suppose that the lottery has a 50% chance of choosing a prime number, and that if Omega could select the same number as the lottery, he’ll do so with 10% probability.
Three simple strategies:
1) Always one-box: Gets Omega’s payout every time, wins the lottery 50% of the time. Average total payout $2M. (numbers are the same 10% of the time when the lottery is ‘prime’)
2) Always two-box: Omega never pays out, wins the lottery 50% of the time. Average total payout $1.001M. (numbers are the same 10% of the time when the lottery is ‘composite’)
3) Normally one-box, two-box when numbers are the same. Omega pays out 95% of the time. Lottery pays out 50% of the time. Average total payout $1.95M. (Numbers are the same 10% of the time when the lottery is ‘composite’)
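Plugging in the 50%/10% assumptions, the three averages come out as stated (a quick check; Omega’s own error rate is still ignored, and the variable names are mine):

```python
P_COMPOSITE = 0.5  # the lottery picks a composite half the time
P_MATCH = 0.10     # Omega matches the lottery number 10% of the time, when it can
LOTTERY, OMEGA, BOX_A = 2_000_000, 1_000_000, 1_000

# 1) Always one-box: Omega's box always pays; lottery pays half the time.
ev_always_one = OMEGA + P_COMPOSITE * LOTTERY
# 2) Always two-box: only box A from Omega's side; lottery as before.
ev_always_two = BOX_A + P_COMPOSITE * LOTTERY
# 3) One-box unless the numbers match. A match happens when the lottery
#    is composite AND Omega matches it: 5% of all games.
p_match = P_COMPOSITE * P_MATCH
ev_conditional = (p_match * (BOX_A + LOTTERY)                     # two-box on a match
                  + (P_COMPOSITE - p_match) * (OMEGA + LOTTERY)   # composite, no match
                  + (1 - P_COMPOSITE) * OMEGA)                    # lottery prime

assert round(ev_always_one) == 2_000_000
assert round(ev_always_two) == 1_001_000
assert round(ev_conditional) == 1_950_050
```

Strategy 3 pays Omega’s prize in 95% of games, exactly the figure questioned below, and still comes out $49,950 per game behind always one-boxing.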
The trick is that the question tries to lead you to the wrong counterfactual by drawing your attention to the situation where the numbers are the same. Whether you see the numbers being the same depends on your decision. In the counterfactual world where you decide something else, the lottery number doesn’t change to match Omega’s prediction. Instead, in the counterfactual world, the lottery number and Omega’s number are different.
The sixth virtue is empiricism. Nice job.
I don’t see the relevance. The commenter contemplated a hypothetical scenario through abstract thinking, there’s no empiricism here.
Actually doing the math, rather than just relying on intuition about what sounds right.
I’m not sure where you got that 95% number from for your strategy #3; it sounds like the “both numbers are the same” situation only happens once every several thousand runs.
Anyway, if you’re using strategy 1, then if the two numbers are the same, that means that the number is prime, and your payout for this scenario is only $1 million (you lost the lottery). If you’re using strategy 3, then the number is not prime, and the payout is $2.001 million (the number is not prime, because you’re going to two-box).
There is no difference between strategy 1 and strategy 3 except in the one scenario where both numbers are the same, and the one scenario where both numbers are the same, strategy 3 is better. Therefore, strategy 3 is always better.
Posting before reading comments:
If I one-box then (ignoring throughout the tiny probabilities of Omega being wrong) the number is prime. I receive $1M from Omega and $0 from the Lottery.
If I two-box then the number is composite, Omega pays me $1K, and the Lottery pays me $2M.
Therefore I two-box.
I think this line of reasoning relies on the Number Lottery’s choice of number being conditional on Omega’s evaluation of you as a one-boxer or two-boxer. The problem description (at the time of this writing) states that the Number Lottery’s number is randomly chosen, so it seems like more of a distraction than something you should try to manipulate for a better payoff.
Edit: Distraction is definitely the wrong word. As ShardPhoenix indicated, you might be able to get a better payoff by making your one-box / two-box decision depend on the outcome of the Number Lottery.
Those are not the only precommitments one can make for this type of situation.
Of course! I meant to say that Richard’s line of thought was mistaken because it didn’t take into account the (default) independence of Omega’s choice of number and the Number Lottery’s choice of number. Suggesting that there are only two possible strategies for approaching this problem was a consequence of my poor wording.
What? I can’t even parse that.
There IS a number in the box which is the same as the one at the Lottery Bank. The number either is prime or it is composite.
According to the hypothetical, if I two-box, there is a 99.9% correlation with Omega putting a composite number in his box, in which case my payoff is $2,001,000. There is a 0.1% correlation with Omega putting a prime number in the box, in which case my payoff is $1,001,000. If the correlation is a good estimate of probability, then my expected payoff from two-boxing is $2 million, more or less. If I one-box, blah blah blah, expected payoff is $1 million.
Sorry for my poor phrasing. The Number Lottery’s number is randomly chosen and has nothing to do with Omega’s prediction of you as a two-boxer or one-boxer. It is only Omega’s choice of number that depends on whether it believes you are a one-boxer or two-boxer. Does this clear it up?
Note that there is a caveat: if your strategy for deciding to one-box or two-box depends on the outcome of the Number Lottery, then Omega’s choice of number and the Lottery’s choice of number are no longer independent.
By that logic, you would not chew gum in the CGTA problem.
A crucial difference between the two problems is that in the CGTA problem I can distinguish among an inclination to chew gum, the act of chewing gum, the decision to chew gum, and the means whereby I reach that decision. If the correct causal story is that CGTA causes both abscesses and an inclination to chew gum, and that gum-chewing is protective against abscesses in the entire population, then the correct decision is to chew gum. The presence of the inclination is bad news, but not the decision, given knowledge of the inclination. This is the causal diagram:
My action in looking at this diagram and making a decision on that basis does not appear in the diagram itself. If it were added, it would be only as part of an “Other factors” node with an arrow only to “Action of chewing gum”. This introduces no problematic entanglements between my decision processes and the avoidance of abscesses. Perfect rationality is assumed, of course, so that when considering the health risks or benefits of chewing gum, I do not let my decision be irrationally swayed by the presence or absence of the inclination, only rationally swayed by whatever the actual causal relationships are.
In the present thought-experiment, it is an essential part of the setup that these distinctions are not possible. My actual decision processes, including those resulting from considering the causal structure of the problem, are by hypothesis a part of that causal structure. That is what Omega uses to make its prediction. This is the causal diagram I get:
Notice the arrow from the diagram itself to my decision-making, which makes a non-standard sort of causal diagram. To actually prove that some strategy is correct for this problem requires formalising such self-referential decision problems. I am not aware that anyone has succeeded in doing this. Formalising such reasoning is part of the FAI problem that MIRI works on.
Having drawn that causal diagram, I’m now not sure the original problem is consistently formulated. As stated, Omega’s number has no causal entanglement with the Lottery’s number. But my decision is so entangled, and quite strongly. For example, if the Lottery chooses a number whose primeness I can instantly judge, I know what I will get from the Lottery, and if Omega has chosen a different number whose primeness I cannot judge, the problem is then reduced to plain Newcomb and I one-box. If Omega’s decision shows such perfect statistical dependence with mine, and mine is strongly statistically dependent with the Lottery, Omega’s decision cannot be statistically independent of the Lottery. But by the Markov assumption, dependence implies causal entanglement. So must that assumption be dropped in self-referential causal decision theory? Something for MIRI to think about.
The case where I can see at a glance whether Omega’s number is prime is like Newcomb with a transparent box B. I (me, not the hypothetical participant) decline to offer a decision in that case.
That’s the same diagram I drew for UNP! :D
This is my reasoning exactly, as I reasoned it out, before reading comments. This seems like the “obvious” answer, at least in the timeless sense. Of course, when reading a problem like this, one more thing you have to bear in mind (outside the problem, of course, as inside the problem you do not have this additional information) is that the obvious solution is unlikely to be fully correct, or else what would be the point of Eliezer posing the problem in the first place… If one is going to accept timeless decision making, then I see no reason not to accept it in cases where it is anchored on fundamental mathematical facts rather than “just” on physical facts like whether or not you will choose to take the money, as in the standard Newcomb’s paradox.
Maybe, but obvious answers to such problems are often wrong. Often, multiple different answers are each obviously the exclusively right answer. And look at all the people in this thread one-boxing. Not so obvious to them.
My reasoning was as stated, but I’m not going to use its “obviousness” as an additional argument in favour of it. And on reading the comments and the Facebook thread, I notice that I have neglected to consider the hypothetical situations in which the two numbers are different. On considering it, it seems that I should still argue as I did, using all the available information, i.e. that on this occasion the two numbers are the same. But it is merely obvious to me that this is so; I am not at all certain.
I’m disinclined to guess the right answer on the basis of predicting the hidden purposes of someone smarter than me. But I can, as it happens, think of a reason for posing a question whose “obvious” solution is completely right. It could be just the first of a garden path series of puzzles for which the “obvious” solutions are collectively inconsistent with any known decision theory.
Upvoted for awesome epigram.
To properly decide I need to know if I am Jones or Leftie.
EDIT: Given that the Ultimate Trolley only directly kills Jones or Leftie, any agent that tries factoring either Omega’s or the lottery’s number must be Jones or Leftie. To decide whether it’s better to factor the number and die by trolley, it’s necessary to know the answer to the Ultimate Trolley Problem and whether it’s better for Jones or Leftie to die. The expected utility of the final outcome of the Ultimate Trolley Problem certainly dwarfs the expected utility of either the lottery or Omega’s bank.
This game seems to have a roughly 50% chance of a fatality.
I would claim that the correct probability to hold is somewhere around 0.999 in favor of composite if you take both boxes [...]
EDIT: Looks like I was right about probabilities, but too hasty about thinking that meant you should two-box. Omega can be malicious:
Suppose we do this Primecomb + lottery experiment a jillion times. What algorithm maximizes payout over those jillion times?
One-boxing sure seems like a good plan—usually the lottery will pay out, sometimes not, but no biggie since you can’t affect it. And since there aren’t that many prime numbers, the lottery and the box don’t share numbers very often, though when they do you always lose the lottery.
But suppose you decide to two-box every time you see the lottery and the box have the same number. Now Omega’s action is undefined—if the lottery number is composite Omega can basically choose whether you’re to two-box or one-box. If we think in terms of the jillion trials of the same game, one-boxing would still be better, since when Omega undefinedly decides to make you two-box, you were going to win the lottery anyhow and could have gotten more money if Omega had decided to make you one-box.
However, if you two-box every time the numbers are the same, every time the numbers are the same you’ll win the lottery. So if you see the numbers the same, it certainly sounds reasonable to try to be part of the lottery-winning group, right?
Hold up though. Suppose we get to program Omega a little bit. One version we make nice—call it Nicemega. It never makes the numbers be the same, so I always one-box and get lots of money. Another version we make mean—Meanmega. It chooses the number on the box to minimize the money it has to pay out. If you two-box when the numbers are the same, it makes you two-box whenever it can. If you are willing to two-box when the numbers are the same and you start seeing a lot of same numbers, you should switch plans, because you’re probably getting Meanmega’d! So why should you two-box when you see the numbers the same, if it just means you’re getting Meanmega’d?
In other words, the optimal strategy really can be globally optimal, even though sometimes it requires you to take locally bad actions. Seem a little familiar?
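A rough way to check this argument is to simulate the jillion trials directly, with Meanmega filling its box to minimize payout while still predicting correctly. This is only a sketch under assumed numbers: primality of the lottery draw is reduced to a coin flip with a made-up density, and the Meanmega tie-breaking logic is my own reading of the comment above.

```python
import random

PRIME_DENSITY = 0.2  # assumed fraction of lottery draws that are prime (made up)

def lottery_payout(lottery_is_prime):
    # The Lottery Bank pays $2M on a composite number, $0 on a prime.
    return 0 if lottery_is_prime else 2_000_000

def play_round(strategy, lottery_is_prime):
    """One round against Meanmega, which fills its box to minimize payout
    while still predicting the player correctly."""
    if strategy == "always-one-box":
        # Omega must show a prime (it predicts one-boxing), so its bank pays $1M.
        return 1_000_000 + lottery_payout(lottery_is_prime)
    # strategy == "two-box-on-match"
    if not lottery_is_prime:
        # Omega copies the composite lottery number; the player two-boxes
        # and gets only box A from Omega's side.
        return 1_000 + lottery_payout(lottery_is_prime)
    # Lottery number is prime: Omega picks a *different* prime, the player
    # one-boxes and wins $1M, but the lottery pays nothing.
    return 1_000_000 + lottery_payout(lottery_is_prime)

random.seed(0)
draws = [random.random() < PRIME_DENSITY for _ in range(100_000)]
for s in ("always-one-box", "two-box-on-match"):
    total = sum(play_round(s, p) for p in draws)
    print(s, total / len(draws))
```

Under these assumptions the unconditional one-boxer averages roughly $999K more per composite-lottery round, exactly the rounds Meanmega exploits.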
My favorite example for this is the Unexpected Hanging paradox.
The question is—where was the flaw in the prisoner’s logic? The answer: the only flaw is that the judge really was prepared to hang the man on Friday, even though by then it’s not a surprise. Every prisoner you hang without surprise on Friday buys you four to hang with genuine surprise on the other days of the week. If you’re unwilling to pay in the coin of failed surprises, you cannot buy genuine ones. If you’re the judge, you roll a five-sided die and hang the prisoner on the corresponding day. If you roll Friday, then you damn well hang them on Friday to no surprise, or else you’re not even really trying.
A similar logic leads to the rejection of two-boxing when you see that the numbers are the same. If you aren’t willing to one-box when the lottery has rolled a Friday, er, a prime number, (and Omega has decided to be a jerk and rub it in) then you’re not ever actually one-boxing.
This post almost convinced me. I was thinking about it in terms of a similar algorithm, “one-box unless the number is obviously composite.” Your argument convinced me that you should probably one-box even if Omega’s number is, say, six. (Even leaving aside the fact that I’d probably mess up more than one in a thousand questions that easy.) For the reasons you said, I tentatively think that this algorithm is not actually one-boxing and is suboptimal.
But the algorithm “one-box unless the numbers are the same” is different. If you were playing the regular Newcomb game, and someone credibly offered you $2M if you two-box, you’d take it. More to the point, you presumably agree that you should take it. If so, you are now operating on an algorithm of “one-box unless someone offers you more money.”
In this case, it’s just like they are offering you more money: if you two-box, it’s composite 99.9% of the time, and you get $2M.
The one thing we know about Omega is that it picks composites iff it predicts you will two-box. In the Meanmega example, it picks the numbers so that you two-box whenever it can, which just means whenever the lottery number is composite. So in all those cases, you get $2M. That you would have gotten anyway. Huh. And $1M from one-boxing if the lottery number is prime. Whereas, if you one-box, you get $1M 99.9% of the time, plus a lot of money from the lottery anyway. OK, so you’re completely right. I might have to think about this more.
Assuming Manfred is completely right, how many non-identical numbers should it take before you decide you’re not dealing with Meanmega and can start two-boxing when they’re the same?
That’s impossible (given your other specifications) against an agent who two-boxes iff the lottery number and Omega’s number match. If the lottery number is prime, this causes Omega to ensure that a different prime number is placed in its box.
Edit: Realized; it should be “The lottery number is chosen independently of Omega’s number.”
Edit again: Just realized that this phrase sneaks in a causal postulate: Omega can’t change the output of the lottery! Starting here in reasoning might make the problem a lot easier. In “logical time,” first the lottery number is selected, then you make your decision, then Omega makes eirs. Of course, formalizing this is non-trivial.
Attempting to translate this English description into a program-segment which I can do algebra on, I get a type error. I can only resolve the type error by changing a vital aspect of the rules, and I have several options for how to do so and no prior provided, so this question is unanswerable as written. This is a very common problem with decision theory work, and I think everyone should make a habit of writing decision theory questions as statically-typed programs, not as prose.
The issue is that, in order to predict whether you will take one or both boxes, Omega must supply all the inputs to your simulation, including the number that you see; and one of the inputs is Omega’s own output. Replacing the number with a boolean that you don’t get to look at would resolve the issue, and you almost do that, by saying that you’re not allowed to factor the number, but the problem still fails to compile if you entangle your decision with any property of the number that’s even a little bit related to primeness.
That doesn’t seem completely right to me. For example, oddness is related to primeness. If I wanted to do the opposite of what Omega predicted, I might try to one-box on even numbers and two-box on odd numbers. But then Omega can just give me an odd number that isn’t prime. More generally, if we drop the lottery and simplify the problem to just transparent Newcomb’s with prime/composite, then for any player strategy that isn’t exactly “two-box if prime, one-box if composite”, Omega can find a way to be right.
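The claim that Omega can always find a fixpoint can be checked mechanically. This sketch is my own encoding: a strategy is a function from the shown number’s primality to an action, and we enumerate Omega’s two options (show a prime or show a composite) to see whether a consistent prediction exists.

```python
def omega_can_be_right(strategy):
    """strategy: primality of the shown number (bool) -> 'one' or 'two'.
    Omega shows a prime iff it predicts one-boxing, so a consistent choice
    exists iff at least one of its two options matches the player's reaction."""
    shows_prime_works = strategy(True) == "one"       # prime shown, player one-boxes
    shows_composite_works = strategy(False) == "two"  # composite shown, player two-boxes
    return shows_prime_works or shows_composite_works

# The only strategy with no fixpoint is "two-box if prime, one-box if composite":
for f, name in [(lambda p: "one", "always one-box"),
                (lambda p: "two", "always two-box"),
                (lambda p: "one" if p else "two", "one-box iff prime"),
                (lambda p: "two" if p else "one", "two-box iff prime")]:
    print(name, omega_can_be_right(f))
```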
Another problem is that Omega might have multiple ways to be right, e.g. if your strategy is “one-box if prime, two-box if composite” or “one-box if odd, two-box if even”. But then it seems that regardless of how Omega chooses to break ties, as long as it predicts correctly, one-boxers cannot lose out to other strategies. That applies to the original problem as well, so I’m in favor of one-boxing there (see wedrifid’s and Carl’s comments for details).
Overall I agree that giving an underspecified problem and papering it over with “you don’t have a calculator” isn’t very nice, and it would be better to have well-specified problems in the future. For example, when Gary was describing the transparent Newcomb’s problem, he was careful to say that in the simulation both boxes are full. In our case the problem turned out to be kinda sorta solvable in the end, but I guess it was just luck.
Yep, this all seems correct; the player does not have enough degrees of freedom to prevent there from being a fixpoint, and it is possible to prove for all interpretations that no strategy does better than tying with the simple one-box strategy. But I feel, very strongly, that allowing this particular kind of ambiguity into decision theory problems is a reliably losing move. That road leads only to confusion, and that particular mistake is responsible for many (possibly most) previous failures to figure out decision theory.
There doesn’t need to be a concrete simulation where all variables attain canonical values. Instead, some variables can retain their symbolic definitions, including as results of recursive calls. Such programs can sometimes be evaluated even without explicitly posing the problem of the existence of consistent variable assignments, especially if you are allowed to make some optimizing transformations during the evaluation (eliminating variables without evaluating them).
It’s also more than an unknown boolean (prime or not) because you can check if it’s the same number as the lottery output.
On one hand, I sympathize with your argument. When Gary was designing the transparent Newcomb’s problem, he was careful to point out that the simulation sees both boxes as full.
On the other hand, can you point out exactly where Carl’s solution proposed on the Facebook thread disagrees with your claim that the problem is unsolvable?
I want to pre-commit to 1-boxing as long as the numbers are different, just like in the standard Newcomb problem. Since Omega knows that I will notice if the numbers are the same, I can decide to make a special case for this situation without affecting the standard case. But I still want to pre-commit to 1-boxing in this case, for the same reason I want to pre-commit in standard Newcomb: I can predict that Omega will be much more likely to put $1,000,000 in box B if I do so, and the causality here doesn’t allow me to influence the outcome of the lottery.
I wonder if we’re all burying the lead. You’ve played these games thousands of times, so why aren’t you already a billionaire? Are you being suckered into overpaying for the numerical lottery and that’s eating your winnings from Omega? Are you paying to play against the Omega Bank? Have you been two-boxing this whole time? We’re not getting the whole story here.
(None of that changes my answer, I’m one-boxing all the way, but we need answers!)
(Note: I haven’t checked yet to see if 1033 is prime)
So… basically, it’s the standard Newcomb’s problem, one box or two: in this singular case, one-boxing means a prime number is being displayed for the lottery, and two-boxing means a composite one.
I’d still probably one box here. If 1033 is prime, and I two box… well, then, Omega probably wouldn’t have picked it and we wouldn’t be discussing this scenario.
Put another way, I don’t see how the lottery number matching Omega’s number gives me any useful information about Omega’s accuracy, since the value of one number in no way depends on the other.
I’ll flip a coin. Heads I 2-box, tails I 1-box. That’s got to be pretty good in expectation. Gotta make omega work for that 99.9% accuracy!
Since this is too complicated for me to figure out in any deep sense in 2 minutes (and I’m not sure exactly what computation Omega is doing or if it’s even well-defined), I’m falling back on EDT, and two-boxing to get the $2M lottery. EDT might be suboptimal in a lot of cases (smoker’s lesion), but at least it’s unlikely to choose wrong, which is better than I can say for what I’d pick if I tried to use TDT or CDT-combined-with-uncertainty-about-who-I-am (whether I’m a copy simulated by omega to determine payoffs for the real copy of me).
Written before reading comments; The answer was decided within or close to the 2 minute window.
I take both boxes. I am uncertain of three things in this scenario: 1) whether the number is prime; 2) whether Omega predicted I would take one box or two; and 3) whether I am the type of agent that will take one box or two. If I take one box, it is highly likely that Omega predicted this correctly, and it is also highly likely that the number is prime. If I take two boxes, it is highly likely that Omega predicted this correctly and that the number is composite. I prefer the number to be composite, therefore I take both boxes on the anticipation that when I do so I will (correctly) be able to update to 99.9% probability that the number is composite.
Thinking this through actually led me to a bit of insight on the original newcomb’s problem, namely that last part about updating my beliefs based on which action I choose to take, even when that action has no causal effects on the subject of my beliefs. Taking an action allows you to strongly update on your belief about which action you would take in that situation; in cases where that fact is causally connected to others (in this case Omega’s prediction), you can then update through those connections.
Prior to seeing the fact that the Lottery numbers matched, I would have liked to have pre-committed to one boxing in all cases. That’s how I set my deterministic algorithm.
Therefore, I will surely one box. This seems more or less identical to the classic Newcomb. Yes, I know the number is prime now, so I would like to get away with taking the second box and I should try as hard as I can to override my initial programming and two box...but unless my algorithm is unsuccessfully pre-committed, I will fail to do so.
Some folks seem to think you ought to pre-commit to set your algorithm to two-box if and only if the numbers match. That’s wrong because you aren’t affecting the Lottery. All you are doing is making it so that Omega sometimes chooses a composite number which is identical to the Lottery number when the Lottery number is composite. The Lottery is random and irrelevant.
Edit: Some other commentators seem to interpret EDT as two boxing in this scenario, so I guess it does differ from classical Newcomb. Would EDT also require you to pre-commit to two-boxing, or is that just what EDT says when thrust into the scenario? (if the latter, isn’t that a huge problem?)
I think most of the commenters aren’t getting that this is a parody. Edit: It turns out I was wrong.
“Unlike these other highly-contrived hypothetical scenarios we invent to test extreme corner-cases of our reasoning, this highly-contrived hypothetical scenario is a parody. If you ever find yourself in the others, you have to take it seriously, but if you find yourself in this one, you are under no such obligation.”
Yes, that’s what I’m saying. The other ones are meant to prove a point. This one is just to make you laugh, just like the one it is named after. http://www.mindspring.com/~mfpatton/Tissues.htm
So if you found yourself in the unlikely scenario of a regular Newcomb’s Problem, you have an answer for it; but if you found yourself in the unlikely scenario of this problem, you wouldn’t feel obliged to be able to answer it?
Well… the linked Ultimate Trolley problem is a parody.
If this is a parody, it’s evidently interesting enough to think about anyway.
Can you explain why Eliezer’s motives for writing it should limit what anyone else chooses to do with it?
ETA: Parent edited. This now makes less sense as a response.
yeah, sorry. I realized that even though the first sentence on its own was a simple true statement, it might connote that I thought that everyone who was taking it seriously was being silly, when I really just meant to innocently point out some evidence that aligned with summerstay’s opinion that it might be parody (or serious-but-parody-mimicking, or something). So I added a second sentence to disassociate myself from the connotation that might otherwise be inferred.
I’m at the current MIRI workshop, and the Ultimate Newcomb’s Problem is not a parody.
I don’t get paid on the basis of Omega’s prediction given my action. I get paid on the basis of my action given Omega’s prediction. I at least need to know the base-rate probability with which I actually one-box (or two-box), although with only two minutes, I would probably need to know the base rate at which Omega predicts that I will one-box. Actually, just getting the probability for each of P(Ix|Ox) and P(Ix|O~x) would be great.
I also don’t have a mechanism to determine if 1033 is prime that is readily available to me without getting hit by a trolley (with what probability do I get hit by the trolley, incidentally?), nor do I know off-hand the ratio of odd primes to odd composites.
I don’t quite have enough information to solve the problem in any sort of respectable fashion. So what the heck, I two-box and hope that Omega is right and that the number is composite. But if it isn’t, then I cry into my million dollars. (With P(.1): I don’t expect to actually be sad winning $1M, especially after having played several thousand times and presumably having won at least some money in that period.)
I can’t decide anything in 2 minutes, so I’d just one-box it because I remember it as the correct solution to the original Newcomb’s problem — and hope for the best.
+EV of one boxing swamps my uncertainty over the status of 1033.
Problem lacks specification: ought we assume that Omega also predicted TNL’s number? Or was that random both to me, Omega-past, and TNL? Omega predicting correctly in 99.9% of previous cases doesn’t determine this.
(eta: Ah, was answered on the Facebook thread; Omega predicted the lottery number. Hm.)
...so if I google ‘is 2033 a prime number’ and receive the answer that it isn’t, all in under two minutes, and put the money from box A into box B and then choose box B, do I get any money?:)
EDIT:
Reading through comments, it appears the idea is that you’re attempting to control whether or not Omega matches the number with your choice of strategy, rather than what you get from matched or unmatched lotteries. So the idea seems to be that always one-boxing yields lower payoffs from matching lotteries but causes lotteries to match less often, which is beneficial because unmatching lotteries have better payoffs than matching lotteries.
I haven’t looked at the comments yet. Two minutes was enough thinking time to get me to one-box, but not enough time to verbalize my intuition. There was a post earlier on lw about making decisions with imperfect memory that is relevant, I think.
My intuition is something along the lines of ‘my decisions only affect whether omega gives me a million dollars or not, so the lottery doesn’t matter’.
Further thoughts: your strategy when the numbers match changes the information that a match conveys. If you one-box, it tells you that the particular round loses the lottery. If you two-box, it tells you that the particular round wins.
Since you win the lottery just as often regardless of when you know that you win the lottery, the value of this information is zero. All two-boxing does is move the coincidences to coincide with winning the lottery. That, and lose you nearly a million dollars per coincidence.
(Answering before reading any other responses)
Both boxes—I want the number to be composite, so I want Omega to have selected a composite number, which he’d have more chances of doing if I two-boxed.
EDIT: wedrifid’s explanation has now mostly convinced me that one-boxing is correct instead. (My expressed logic was too much EDT-influenced, I think)
You want the number to be composite, so you also want the Numerical Lottery’s random number generator to have selected a composite number. That’s trickier to influence.
The number 1033 is prime. The rest of the hypothetical scenario is pretty confusing, and it’ll take me longer to analyze it and be confident about my conclusion than it did to determine that 1033 is prime.
You just got run over by a trolley. Death is almost certainly worse than either being given a free $1M or $2M. This illustrates that failure to consider the decision being made is in general a sub-optimal game-theoretic strategy. Fortunately Omega pays out despite your failure to take any boxes (before your demise) and so your heirs gain an additional $1M. As for your timeless influence on Omega’s decision: by the wording given (“IF … AND IF” instead of “IF … ELSE”) Omega is free to choose whichever number he likes given that neither criteria is satisfied.
Rest in Peace.
This isn’t a fair problem (in the sense that you defined somewhere else): two people both choosing to take both boxes, but via different algorithms (only one of whom factors the number), will get different rewards.
I don’t know what “trying to factor” even would be for a number so small. It just looks like a prime. I may have seen it on a prime number list, or as a prime factor of something, or who knows where. There are easy-to-construct rules for determining divisibility by its potential factors.
One could also use the Miller-Rabin primality test, which I in fact happen to have implemented before. Much of public-key cryptography depends on the fact that testing primality is easier than factoring a number. I’m pretty sure there is no general algorithm for determining whether an algorithm is a good primality test.
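For reference, a minimal Miller-Rabin along the lines the comment mentions. This is a generic sketch, not the commenter’s actual implementation; with this fixed base set the result is known to be exact for all n below about 2.15 × 10^12.

```python
def miller_rabin(n, bases=(2, 3, 5, 7, 11)):
    """Miller-Rabin primality test. With these fixed bases the answer is
    provably exact for all n < 2,152,302,898,747 (a known bound)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

print(miller_rabin(1033))  # True: 1033 is prime (trolley permitting)
```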
(I presume the point is that you aren’t trying to determine whether it is prime or not, which breaks all sorts of assumptions inherent in utility maximization)
If that bothers you, how about instead of displaying the number, instead what you see is the number encrypted using a key known only to the lottery-runners and Omega?
It could be more interesting, though, if it was 7. That may better demonstrate the inconsistencies in mathematics that result from an incorrect hypothetical about your choice.
In a transparent Newcomb’s, I can simply take one empty box and leave the other, if one box is empty, and take both boxes if they are both full.
I’m not clear on what constitutes trying to factor the number. It would seem that noticing if it was odd wouldn’t count as trying to factor it, but what about forms of inductive reasoning or non-exhaustive heuristics?
Then there are a couple billion dollars in my bank account, and the marginal utility of one more million wouldn’t be that large. :-)
That’s not a bug, it’s a feature. If you’re already a billionaire, then your utility function is nice and linear for changes on the order of a million or two, so no need for angst about $2M not being close to twice as good as $1M.
I hadn’t considered this angle, but I agree with this. If I’m going to get trollied for thinking the wrong thought, I would want to try hard not to think at all since I might accidentally think about how I already know whether the number is prime.
On the other hand, I don’t think this is a consequence the post intended.
Omega knows I’m smart enough to two box when I see that the #’s match up. So the # will be composite, thereby fulfilling his goal of predicting my actions and responding appropriately. The Lottery Bank doesn’t care about my decisions.
So I’ll two-box. I’ll do so, and would predictably do so. So Omega has selected a composite #, and I receive $1000 from the boxes. Since it’s the same # the lottery will give me 2 million dollars.
But in this example Omega has already selected 1033, a prime, which means Omega already knows you will one-box. If it were me, how I would switch from your optimal strategy to deterministically one-boxing before even knowing whether the number is prime (I didn’t know until I looked it up; I’m speaking from the morgue right now) is beyond me.
I guess the lesson in this exercise is the same general argument Eliezer has made against lotteries: don’t bet on outcomes you don’t control.
Immediate thoughts, before reading comments: One-box. I had started to think more deeply until I read the part about being run over for factoring, and for some reason my brain applied it to reasoning about this topic as a whole and spit out a final answer.
Intuitively, it seemed one boxing would get me a million, as per standard Newcomb. The lottery two million seemed like gravy above that (diminishing marginal utility of money), with a potential for 3 million total. Since they’re independent, the word “separately” and its description made it seem like the lottery was unable to be affected by my actions at all. Thus, take box B, and hope for a lottery win. Definitely don’t over think it, or risk a trolley encounter.
Posting before checking the comments.
If I take only box B I will either make $1M or $2M. Omega, with its 99.9% accuracy, will likely have selected a prime number. Expected utility is 0.999 × $1M + 0.001 × $2M = $1.001M.
If I take both, I will either get $1.001M or $2.001M. Already I’m grabbing both boxes, because the expected utility is clearly higher. Omega would likely have selected a composite number. Expected utility is therefore 0.999 × $2.001M + 0.001 × $1.001M = $2M.
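Spelling out that arithmetic (this just reproduces the comment’s conditional-on-action expected values; whether conditioning on your own action this way is legitimate is exactly what’s in dispute in this thread):

```python
# Accuracy treated, as in the comment, as P(prime | I one-box) and
# P(composite | I two-box), i.e. the EDT-style conditioning under dispute.
P = 0.999

# One-box: prime (p = 0.999) pays $1M from Omega, $0 lottery;
#          composite (p = 0.001) pays $0 from Omega, $2M lottery.
one_box = P * 1_000_000 + (1 - P) * 2_000_000

# Two-box: composite pays $1K + $2M; prime pays $1K + $1M.
two_box = P * (2_000_000 + 1_000) + (1 - P) * (1_000_000 + 1_000)

print(one_box, two_box)  # about 1,001,000 vs 2,000,000
```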
In cases where the Lottery number doesn’t match Omega’s, I have a number of general strategies available, most of which might get me hit by the trolley depending on how it defines factoring. Does checking whether the ones digit is even or the number 5 count? Does summing the digits (and then the digits of the sum recursively if needed) and checking if the result is 3, 6 or 9 count? Using these two strategies would improve my odds significantly, but risks the wrath of the trolley.
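The two screens described above can be written down explicitly; they only catch multiples of 2, 3, and 5, and whether a trolley-adjudicator would count this as “factoring” is, of course, an open question. A minimal sketch:

```python
def cheap_composite_screen(n):
    """For a multi-digit n: the last-digit check catches multiples of 2 and 5,
    and the recursive digit sum (digital root) catches multiples of 3.
    Returns True if n is certainly composite, False if the screen is inconclusive."""
    if n % 10 in (0, 2, 4, 5, 6, 8):
        return True
    root = n
    while root > 9:
        root = sum(int(d) for d in str(root))
    return root in (3, 6, 9)

print(cheap_composite_screen(1033))  # False: the screen is inconclusive here
```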
2-box on one-off [Edit: on rereading, this comes off as more confident than I intended. This was what I thought I would do in the 2 minutes, which in retrospect were spent unproductively], but the one-off nature of the problem is modified by the third paragraph, which means that strategy might mean one-boxing previously got me no money. I would interpret that as less-than-perfect accuracy (which might be the source of the 99.9% probability) How does Omega deal with mixed strategies?
In normal Newcomb, I believe the standard treatment is that e leaves the black box empty in those cases. So, in this problem, I guess e would unconditionally select a composite number for eir box. With that specification, (two-boxing unconditionally) weakly dominates any unconditional mixed strategy, both in original Newcomb and this problem.
I did not interpret paragraph 3 to contain any information about prior payouts… For instance, if one were to 1-box (successfully!) in every case that did not have such a lottery hedge, it would appear consistent with the problem statement to me.
It tells you that there were prior payouts.
Ah, well, it tells us that there were prior games.
If I didn’t get prior payouts from those games, the updates on that is way bigger than any other reasoning such as what we’re doing on this thread.
I expect to maintain control over Omega(today, my decision), even after learning Omega(today, my decision) = Lottery(today). I take both boxes for ~$2,001,000.
Since I don’t know the definition of the process generating the lottery number, my prior distribution over the primality of today’s lottery number (before learning that Omega’s number in Box B and the Lottery Number are equal) would be taken from the density of primes (1/ln(1033) ≈ 0.14). I don’t know why my intuition says to throw that number out after I condition on the equality. I feel less certain that I should throw away my prior on the primality of the lottery number if that prior is higher. If I start out with P(prime?(Lottery())) = 1, and then throw that away because I think I maintain control of the number’s primality, that seems almost as incoherent as if I gave different distributions for the primality of equal numbers generated two different ways.
I don’t think I understand the problem. While reading the Numerical Lottery (NL) paragraph, I decided choosing only box B was the obvious answer, which led me to think I’ve misunderstood something.
In box B is $1,000, a composite number. If I pick a composite number, the NL gifts me $2 million it-doesn’t-really-matter-what-currency
Oh, I misread the problem. In box A is $1,000. Well, now I think I’ve understood the problem, and chosen the right answer for mistaken reasons.
If the NL pays “[me] $2 million if it has selected a composite number, and otherwise [...] $0,”* and the number it has selected goes in box B, then regardless of the number shown I can only profit from the NL by choosing box B. I’m guaranteed a profit by signalling to Omega that I will only choose box B.
Even in a scenario where box B contains the lesser amount, ‘tis still the most rational choice, considering I can apparently play the game an infinite number of times—or at least three thousand and one times. Considering that I will always choose box B when reasoning from the provided information (unless I’m still not understanding something), by this point I have at least $3,001,000,000; if I ever choose otherwise, Omega will no longer have reason to predict I will only one-box, and I lose my guarantee. The word ‘Ultimate’ makes me think I’m drastically wrong.
* Emphasis added.
Written before reading comments:
This is not a well-formed problem. If my strategy is to 1-box iff the numbers match, then Omega, choosing his number independently of the lottery number, must choose a composite number, since I will 2-box 99.9% of the time. Therefore if Omega is correct 99.9% of the time when the numbers match, then the lottery number must be composite at least 99.9% of the time.
However, if my strategy is to 2-box iff the numbers match, then Omega, choosing his number independently of the lottery number, must choose a prime number, since I will 1-box 99.9% of the time. Therefore if Omega is correct 99.9% of the time when the numbers match, then the lottery number must be prime at least 99.9% of the time.
I don’t know the odds of the lottery, but it cannot be both prime at least 99.9% of the time and composite at least 99.9% of the time, so one of these two strategies will consistently make Omega wrong when the numbers match.
I realized this is slightly wrong as written, because Omega doesn’t have to be correct 100% of the time when they don’t match, so he could do a little better using a random algorithm, but this just means that some of the 99.9%s need to be replaced with 99.8%.