A Decision Problem
The idea for this problem comes from dmytryl.
Omega makes a simulation of you. One of you, either the simulation or the real you, is presented with an offer of $1000.
1. If the simulation is offered the $1000 and rejects it, the real you gets $10,000.
2. If the simulation is offered the $1000 and accepts it, the real you gets $100.
3. If the real you is offered the $1000 and accepts it, the real you gets $1000.
4. If the real you is offered the $1000 and rejects it, the real you gets $0.
Immediately after completion of the decision problem, the simulation is terminated.
The probability with which Omega selects the simulation or the real you is not known. (Omega may always select one of them, select each with equal probability, or select with any valid probabilities.)
You find yourself in the game, with the rules explained to you as above. You don’t know whether you’re the simulation or the real you: do you accept the $1000 or reject it?
The payoffs need only be of the form:
1. $k*X (with k > 1 and X > 1)
2. $X/k
3. $X
4. $0
If $1000 is irrelevant to you, then substitute any enticing value of X, and replace $X with X utils. There are no diminishing returns on the utility you gain from the reward Omega gives you.
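For concreteness, here is a minimal sketch of the two choices’ expected values, assuming you assign some probability q to Omega having picked the simulation (q and the function name are my own; the problem itself leaves the probability unspecified):

```python
# Expected value of accepting vs. rejecting the offer, given a guess q
# for the probability that Omega picked the simulation.
def expected_values(X=1000.0, k=10.0, q=0.5):
    # If the simulation was picked (prob q), its choice sets the real payoff:
    #   accept -> $X/k, reject -> $k*X.
    # If the real you was picked (prob 1 - q):
    #   accept -> $X, reject -> $0.
    ev_accept = q * (X / k) + (1 - q) * X
    ev_reject = q * (k * X)
    return ev_accept, ev_reject

print(expected_values())  # (550.0, 5000.0) for X=1000, k=10, q=0.5
```

With q = 0 (Omega never simulates) accepting dominates; as q grows, rejecting takes over.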
Do you have a strategy for a general form of this problem?
You don’t need to invoke simulation or cloning. This is exactly equivalent to: Omega will play one of two games; you won’t know which, and you don’t know the distribution. Both consist of offering you a yes/no choice.
1. Game A and you take it: you get $100.
2. Game A and you reject it: you get $10,000.
3. Game B and you take it: you get $1000.
4. Game B and you reject it: you get $0.
The strategy consists of assigning a probability of which game you’re in and doing the math. No reason is given that would lead you toward any probability, so I’d probably choose “Omega is an ass and probably prefers game B because it makes players feel bad”, and take the $1000 or $100.
In a few days, I’ll upload a paper I’m currently working on. We’ll discuss the efficacy of your solution then. Is it really rational?
It’s instrumentally rational if you value money linearly in this range and assign less than about 9% probability that Omega is playing game A. There’s not enough information to determine if that probability assignment is rational.
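The roughly 9% figure comes from setting the two expected values equal (a quick check; the variable name is mine):

```python
from fractions import Fraction

# Indifference point: accept EV = 100*p + 1000*(1 - p), reject EV = 10000*p,
# where p is the probability that Omega is playing game A.
# Equating them: 1000 = 10900*p.
p_star = Fraction(1000, 10900)
print(p_star, float(p_star))  # 10/109, roughly 0.0917
```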
But remember this about Omega: sure as you stand there you’re going to wind up with an ear full of cider.
I also agree with Dagon’s first paragraph. Then, since I don’t know which game Omega is playing except that either is possible, I will assign 0.5 probability to each game, calculate expected utilities (reject → $5000, accept → $550) and reject.
For the general form I will reject if k > 1/k + 1, which is the same as k*k - k - 1 > 0, i.e. k > (1 + sqrt(5))/2. Otherwise I will accept.
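Under the 50/50 assumption, the golden-ratio threshold can be checked numerically (a sketch; the function name is mine):

```python
import math

# With 0.5 probability on each role, reject EV = k*X/2 and
# accept EV = (X + X/k)/2, so reject wins when k > 1 + 1/k,
# i.e. k*k - k - 1 > 0, i.e. k > (1 + sqrt(5))/2.
phi = (1 + math.sqrt(5)) / 2  # about 1.618

def prefer_reject(k, X=1000.0):
    return k * X / 2 > (X + X / k) / 2

print(prefer_reject(1.5))  # False: 1.5 is below the threshold
print(prefer_reject(10))   # True: 10 is above it, matching the original payoffs
```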
It seems like I’m missing something, though, because it’s not clear why you chose these payoffs and not the ones that give some kind of nice answer.
Either is possible and no mention is made of how it’s chosen (in fact, it’s explicitly stated that the probability is not known), so why would you assign 50% rather than 0% to the chance of game A? If Omega mentioned a few irrelevant options (games C through K) which favored reject, but which it NEVER used (but you don’t know that), would you change your acceptance?
There’s no good reason for assigning 50% probability to game A, but neither is there a good reason to assign any other probability. I guess I can say that I’m using something like a “fair Omega prior” that assumes that Omega is not trying to trick me.
You and Gurkenglas seem to assume that Omega would try to minimize your reward. What is the reason for that?
Base rate pessimism and TANSTAAFL. Offers of free money are almost always tricks, so my prior is that the next offer is also a trick. I expect not to be paid at all, so I choose the option where not being paid would be a clear violation of the stated rules, rather than the one where Omega can claim to have played by the rules and still not pay me.
If you state that I don’t know a probability, I have to use other assumptions. 50/50 is a lazy assumption.
Note: this boils down to “where do you get your priors?”, which is unsolved in Bayesian rationality.
What can I say, your prior does make sense in the real world. Mine was based on the other problems featuring Omega (Newcomb’s problem and Counterfactual mugging) where apart from messing with your intuitions Omega was not playing any dirty tricks.
I think this is a different guy named Omega. No mention of prediction or causality tricks, which are the hallmarks of Newcomb’s problem.
You could also make a version where you don’t know what X is. In this case the always-reject strategy doesn’t work, since you would reject k*X in real life after the simulation rejected X. It seems like if you must precommit to one choice, you would have to accept (and get (X + X/k)/2 on average), but if you have a source of randomness, you could try to reject your cake and eat it too. If you accept with probability p and reject with probability 1 - p, your expected utility would be (p*X + (1 - p)*p*k*X + p*p*X/k)/2. If you know the value of k, you can calculate the best p and see if the random strategy is better than always-accept. I’m still not sure where this is going, though.

I agree with Dagon’s first paragraph.
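For what it’s worth, the best p has a closed form, since the bracketed expression is a downward parabola in p (a sketch; the clamping to [0, 1] and the names are mine):

```python
# Maximize f(p) = p + (1 - p)*p*k + p*p/k, the bracketed part of the
# expected utility (p*X + (1 - p)*p*k*X + p*p*X/k)/2 with X factored out.
# f(p) = (1 + k)*p + (1/k - k)*p**2 is concave for k > 1, with its peak at
# p* = (1 + k) / (2*(k - 1/k)), which simplifies to k / (2*(k - 1)).
def best_p(k):
    return min(1.0, k / (2 * (k - 1)))

def eu_factor(p, k):
    return (p + (1 - p) * p * k + p * p / k) / 2

k = 10
p = best_p(k)
print(p)  # about 0.556
print(eu_factor(p, k) > eu_factor(1.0, k))  # True: beats always-accept
```

For k close to 1 the unconstrained optimum exceeds 1, so the strategy collapses to always-accept.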
Could Omega’s decision of which game to play depend on the algorithm I submit as an answer? One convenient ruling might be that if Omega tried to predict whether I would accept, that would count as the simulation accepting or rejecting, and Omega would have to pay out at least $100.
One approach is worst-case analysis as employed in computer science: assume that Omega wants to minimize our reward, then choose the strategy that maximizes it. Here, that means always accepting, because that never yields less than $100.
If I had a random number oracle that Omega couldn’t predict, I could accept 10000/10900 of the time, because that always yields an expected reward of $100000/109 (about $917), but since Omega can simulate the world this is unlikely.
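That acceptance rate is exactly the point where the two games’ expected rewards coincide (a quick check with exact fractions; names are mine):

```python
from fractions import Fraction

# Accept with probability q = 10000/10900, chosen so the expected reward
# is the same whichever game Omega plays:
#   game A: 100*q + 10000*(1 - q)
#   game B: 1000*q
q = Fraction(10000, 10900)
game_a = 100 * q + 10000 * (1 - q)
game_b = 1000 * q
print(game_a, game_b)  # both 100000/109, about $917
```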
Some interesting variants might have Omega able to predict random number generators within its simulations, but not in the real world...