There’s nothing wrong with those theories. They are wrongly applied, selectively ignoring the part of the problem statement that explicitly says you can’t two-box if Omega decided you would one-box. Any naive application will do that, because all standard theories assume causality, which is broken in this problem. Before applying decision theories, we must work out what causes what. My original post was an attempt to do just that.
There’s nothing wrong with those theories. They are wrongly applied, selectively ignoring the part of the problem statement that explicitly says you can’t two-box if Omega decided you would one-box.
The decision is yours; Omega only foresees it. See also: Thou Art Physics.
Any naive application will do that because the problem statement is contradictory on the surface. Before applying decision theories, the contradiction has to be resolved somehow as we work out what causes what. My original post was an attempt to do just that.
Do that for the standard setting that I outlined above, instead of constructing its broken variations. What it means for something to cause something else, and how one should go about describing situations in that model, should arguably be part of any decision theory.
the problem statement … explicitly says you can’t two-box if Omega decided you would one-box.
The decision is yours; Omega only foresees it.
These stop contradicting each other if you rephrase a little more precisely. It’s not that you can’t two-box if Omega decided you would one-box—you just don’t, because in order for Omega to have decided that, you must have also decided that. Or rather, been going to decide that—and if I understand the post you linked correctly, its point is that the difference between “my decision” and “the predetermination of my decision” is not meaningful.
As far as I can tell—and I’m new to this topic, so please forgive me if this is a juvenile observation—the flaw in the problem is that it cannot be true both that the contents of the boxes are determined by your choice (via Omega’s prediction), and that the contents have already been determined when you are making your choice. The argument for one-boxing assumes that, of those contradictory premises, the first one is true. The argument for two-boxing assumes that the second one is true.
The potential flaw in my description, in turn, is whether my simplification just now (“determined by your choice via Omega”) is actually equivalent to the way it’s put in the problem (“determined by Omega based on a prediction of you”). I think it is, for the reasons given above, but what do I know?
(I feel comfortable enough with this explanation that I’m quite confident I must be missing something.)
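To make the two arguments above concrete, here is the payoff arithmetic each premise licenses. This is only an illustrative sketch: the dollar amounts ($1,000 in the visible box, $1,000,000 in the opaque one) are the usual ones and are assumed here rather than taken from this thread.

```python
# Payoff arithmetic for the two premises above. Assumed amounts (not stated in
# this thread): $1,000 in the visible box, $1,000,000 in the opaque box.

VISIBLE = 1_000
OPAQUE = 1_000_000

# Premise 1: the opaque box's contents track your choice (via Omega's prediction).
one_box_if_tracked = OPAQUE    # you walk away with 1,000,000
two_box_if_tracked = VISIBLE   # Omega foresaw the two-boxing; the opaque box is empty

# Premise 2: the contents are already fixed, whatever they are.
# For either possible content x, two-boxing beats one-boxing by the visible 1,000.
for x in (0, OPAQUE):
    assert (x + VISIBLE) - x == VISIBLE
```

The dominance check at the end is just the two-boxer’s point restated: once the contents are treated as fixed, taking both boxes is better by exactly the visible $1,000 no matter what Omega did.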
An aspiring Bayesian rationalist would behave as I did in the original post: assume some prior over the possible implementations of Omega and work out what to do. So taboo “foresee” and propose some mechanisms, as I, ciphergoth and Toby Ord did.
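One way to cash that out, sketched below under assumptions that are mine rather than the thread’s: summarize each candidate implementation of Omega by a single number p, the probability that its prediction matches your actual choice, put a prior over a few candidate mechanisms, and compare expected payoffs. The particular mechanisms, weights, and payoff amounts are made up for illustration.

```python
# A sketch of "assume some prior over the possible implementations of Omega and
# work out what to do". Assumption (mine, not the thread's): each implementation
# is summarized by p = probability that Omega's prediction matches your choice.

VISIBLE, OPAQUE = 1_000, 1_000_000

def expected_payoffs(p):
    """Expected dollars from each action when Omega is right with probability p."""
    one_box = p * OPAQUE                                  # opaque box is full iff Omega foresaw one-boxing
    two_box = p * VISIBLE + (1 - p) * (OPAQUE + VISIBLE)  # Omega wrong: both prizes
    return one_box, two_box

# Illustrative prior over mechanisms: a near-perfect scanner, a good
# psychologist, and a coin flip, with made-up weights.
prior = {0.99: 0.5, 0.75: 0.3, 0.50: 0.2}

ev_one = sum(w * expected_payoffs(p)[0] for p, w in prior.items())
ev_two = sum(w * expected_payoffs(p)[1] for p, w in prior.items())
print(ev_one, ev_two)  # 820,000 vs 181,000 here; one-boxing wins whenever E[p] > ~0.5005
```

Treating p this way already builds in the view that the prediction is evidence about what you will in fact do, which is part of what is being argued over here; the sketch only shows how the numbers come out once some mechanism is assumed.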
What other cases?