So what you’re saying is that the only reason this problem is a problem is that it hasn’t been defined narrowly enough. You don’t know what Omega is capable of, so you don’t know which choice to make. So there is no way to logically solve the problem (with the goal of maximizing utility) without additional information.
Here’s what I’d do: I’d pick up B, open it, and take A iff I find it empty. That way, Omega’s decision about what to put in Box B would have to depend on what Omega put in Box B, causing an infinite regress that eats all available CPU cycles until the process is terminated. Although that’ll probably result in the AI picking an easier victim to torment and not even giving me a measly thousand dollars.
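A minimal sketch of that regress, purely my own illustration (the function names, dollar amount, and recursion limit are assumptions, not part of the thought experiment): the chooser’s decision consults Omega’s prediction, and Omega’s prediction simulates the chooser, so neither call ever returns.

```python
import sys
sys.setrecursionlimit(100)  # fail fast rather than grinding through the whole stack

def omega_predict():
    # Omega predicts by simulating the chooser's decision procedure.
    return chooser_decision()

def chooser_decision():
    # "Open B, take A iff it's empty" -- but B's contents depend on Omega's
    # prediction, which depends on this very function.
    b_contents = 1_000_000 if omega_predict() == "one-box" else 0
    return "one-box" if b_contents else "two-box"

try:
    chooser_decision()
except RecursionError:
    print("Infinite regress: the prediction and the decision each wait on the other.")
```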
Okay… so since you already know, in advance of getting the boxes, that that’s what you’d do, Omega can deduce that. So you open Box B, find it empty, and then take Box A. Enjoy your $1000. Omega doesn’t need to infinite-loop that one; he knows that you’re the kind of person who’d try for Box A too.
No, putting $1 million in Box B works too. Origin64 opens Box B, takes the money, and doesn’t take Box A. It’s like “This sentence is true.” Whatever Omega does makes the prediction valid.
Not how Omega looks at it. By definition, Omega looks ahead, sees a branch in which you would go for Box A, and puts nothing in Box B. There’s no cheating Omega… just like you can’t think “I’m going to one-box, but then open Box A after I’ve pocketed the million” there’s no “I’m going to open Box B first, and decide whether or not to open Box A afterward”. Unless Omega is quite sure that you have precommitted to never opening Box A ever, Box B contains nothing; the strategy of leaving Box A as a possibility if Box B doesn’t pan out is a two-box strategy, and Omega doesn’t allow it.
Unless Omega is quite sure that you have precommitted to never opening Box A ever
Well, this isn’t quite true. What Omega cares about is whether you will open Box A. From Omega’s perspective it makes no difference whether you’ve precommitted to never opening it, or whether you’ve made no such precommitment but it turns out you won’t open it for other reasons.
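A rough sketch of that reading of Omega’s rule, again just my own illustration with made-up strategy names and the usual $1,000/$1,000,000 amounts: Omega simulates the strategy against both possible contents of Box B and funds it only if it never opens Box A, so it’s the simulated behavior that matters, not any precommitment, and the “peek at B, then grab A” strategy from above walks away with $1000.

```python
MILLION, THOUSAND = 1_000_000, 1_000

def one_box(b_contents):
    # Never opens Box A, whatever Box B turns out to contain.
    return ("B",)

def peek_then_grab_a(b_contents):
    # "Open B first, take A only if B is empty" -- the strategy discussed above.
    return ("B",) if b_contents else ("B", "A")

def omega_fill(strategy):
    # Omega simulates the strategy against both possible contents of Box B and
    # puts in the million only if it never opens Box A in either branch.
    never_opens_a = all("A" not in strategy(contents) for contents in (0, MILLION))
    return MILLION if never_opens_a else 0

def payoff(strategy):
    b = omega_fill(strategy)
    taken = strategy(b)
    return (b if "B" in taken else 0) + (THOUSAND if "A" in taken else 0)

print(payoff(one_box))           # 1000000
print(payoff(peek_then_grab_a))  # 1000 -- B is empty, so the peeker falls back to A
```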
Assuming that Omega’s “prediction” is in good faith, and that we can’t “break” him as a predictor as a side effect of exploiting causality loops etc. in order to win.
I’m not sure I understood that, but if I did, then yes, assuming that Omega is as described in the thought experiment. Of course, if Omega has other properties (for example, is an unreliable predictor) other things follow.
Which means you might end up with either amount of money, since you don’t really know enough about Omega, instead of just the one-box winnings. So you should still just one-box?
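One way to put numbers on that question, as a rough sketch of my own (treating Omega as nothing more than an imperfect predictor that is right with probability p, which is an assumption beyond anything stated above): the expected values only favor two-boxing when p is essentially a coin flip.

```python
def expected_value(p, one_box):
    # p: probability that Omega's prediction is correct, independent of your choice.
    if one_box:
        return p * 1_000_000              # B is full only if Omega foresaw one-boxing
    return 1_000 + (1 - p) * 1_000_000    # A is guaranteed; B is full only if Omega erred

for p in (0.5, 0.51, 0.9, 0.99):
    print(p, expected_value(p, True), expected_value(p, False))
# One-boxing has the higher expectation once p exceeds roughly 0.5005.
```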
If you look in box B before deciding whether to choose box A, then you can force Omega to be wrong. That sounds like so much fun that I might choose it over the $1000.