I was mostly just having fun, and find almost every new problem I see fun. I figured others might like it. You don’t—so be it.
The range is a thousand numbers btw, it includes 1 and 1000, but whatever.
I don’t see how precommitting to one thing and then doing the other, thereby fooling Omega, is possible. In problem 1, one-boxing is the rational choice.
[ epistemic status: commenting for fun, not seriously objecting. I like these posts, even if I don’t see how they further our understanding of decisions ]
The range is a thousand numbers btw, it includes 1 and 1000 ... larger than 1 and smaller than or equal to 1000.
We’re both wrong. It includes 1000 but not 1. Agreed with the “whatever” :)
I don’t see how precommitting to one thing and then doing the other, thereby fooling Omega, is possible
That’s the problem with underspecified thought experiments. I don’t see how Omega’s prediction is possible. The reasons for the 99% accuracy matter a lot. If she just kills people who are about to challenge her prediction, then one-boxing in 1 and two-boxing in 2 is right. If she’s only tried it on idiots who think their precommitment is binding, and yours isn’t, then tricking her is right in 1, and publicly two-boxing is still right in 2.
BTW, I think you typo’d your description of one- and two-boxing. Traditionally, it’s “take box B or take both”, but you write “take box A or take both”.
I think that, by definition, if you precommitted to something you have to do it. A “nonbinding precommitment” isn’t a precommitment despite the grammatical structure of that phrase, just like a “squashed circle” isn’t a circle.
(I do separately think Omega is impossible. Predicting someone’s actions in full generality, when they’re reacting to one of your own actions, implicates the Halting Problem.)
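A minimal sketch of that worry, in Python (the agent and predictor names here are toy stand-ins, not anything from the post): an agent that can consult a would-be predictor of its own choice can simply do the opposite, which is the same diagonal move behind the Halting Problem.

```python
# Toy illustration of the diagonalization worry: an agent that reacts to a
# would-be predictor of its own choice can simply defy it, so no predictor
# is correct about it in full generality.

def contrarian_agent(predictor):
    """Ask the predictor what I'll do, then do the other thing."""
    predicted = predictor(contrarian_agent)
    return "two-box" if predicted == "one-box" else "one-box"

def omega(agent):
    """Any fixed guess a predictor makes about this agent gets defied."""
    return "one-box"  # swap to "two-box" and the agent one-boxes instead

print(contrarian_agent(omega))  # "two-box": the prediction is wrong either way
```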
Yeah, I should have used more words. What I meant was “publicly state and behave as if precommitted, enough to make Omega predict you will one-box, but then actually two-box”. “Fake precommit” may be better than “nonbinding precommit” as a descriptor.
And, as you say, I don’t believe Omega is possible in our current world. Which means the thought experiment is of limited validity, except as an exploration of decision theory and theoretical causality.
“Is impossible in our current world” carries the connotation of “contingently impossible”. If Omega is impossible because he can’t solve the Halting Problem, he’s necessarily impossible.
There may be prediction or undetectable coercion mechanisms that work on humans to make Omega able to predict/cause choices in such scenarios, which don’t require a fully general solution to the Halting Problem.
[ epistemic status: commenting for fun, not seriously objecting. I like these posts, even if I don’t see how they further our understanding of decisions ]
Cool. I apologize if I came off a bit snarky earlier. Thanks for commenting! I read Eliezer’s post and was thinking about how to make a problem I like (even) more, and this was the result. Just for fun, mostly :)
We’re both wrong. It includes 1000 but not 1. Agreed with the “whatever” :)
Well, I defined the range. I can’t really be wrong, haha ;) But I get your point: with prime and composite, >=2 would make more sense.
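For concreteness, a quick check of what the post’s wording (“larger than 1 and smaller than or equal to 1000”) actually gives; a minimal sketch in Python, just to pin down the counting:

```python
# The range as worded in the post: integers larger than 1 and <= 1000,
# i.e. 2 through 1000. Python's range() is half-open, hence the 1001.
numbers = range(2, 1001)
print(len(numbers))     # 999 numbers, so not quite a thousand
print(1 in numbers)     # False: 1 is excluded
print(1000 in numbers)  # True: 1000 is included
```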
That’s the problem with underspecified thought experiments. I don’t see how Omega’s prediction is possible. The reasons for the 99% accuracy matter a lot. If she just kills people who are about to challenge her prediction, then one-boxing in 1 and two-boxing in 2 is right. If she’s only tried it on idiots who think their precommitment is binding, and yours isn’t, then tricking her is right in 1, and publicly two-boxing is still right in 2.
The accuracy is something I need to learn more about at some point, but it should (I think) simply be read as “Whatever choice I make, there’s a 0.99 probability that Omega predicted it.”
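To see what that reading buys, here is a rough expected-value sketch assuming the traditional Newcomb payoffs (a $1,000,000 box that is filled when one-boxing was predicted, plus a $1,000 box that is always available; those amounts are an assumption here, not taken from the post):

```python
# Expected values under "whatever choice I make, there's a 0.99 probability
# Omega predicted it", with the traditional Newcomb payoffs assumed above.
ACCURACY = 0.99

# One-boxing: the big box is filled exactly when Omega predicted one-boxing.
ev_one_box = ACCURACY * 1_000_000 + (1 - ACCURACY) * 0

# Two-boxing: the small $1,000 is guaranteed; the big box is filled only
# when Omega wrongly predicted one-boxing.
ev_two_box = ACCURACY * 1_000 + (1 - ACCURACY) * (1_000_000 + 1_000)

print(round(ev_one_box), round(ev_two_box))  # 990000 11000
```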
BTW, I think you typo’d your description of one- and two-boxing. Traditionally, it’s “take box B or take both”, but you write “take box A or take both”.
Thanks Dagon! Fixing it.