Personally I don’t believe that the problem is actually hard. None of the individual cases are hard; the decisions are pretty simple once we go into each case. Rather, I think this question is a philosophical bamboozle: it is really a question about human capabilities that disguises itself as a problem about decision theory.
As I talk about in the post, the answer to the question changes depending on which decisions we afford the agent. Once that has been determined, it is just a matter of using min-max to find the optimal decision process. So people’s disagreements aren’t actually about decision theory; they are disagreements about which choices are available to us.
If you allow the agent to decide their brainstate and also act independently of it, then it is easy to see that the best solution is (1-box brainstate) → (take 2 boxes). People who say “that’s not possible because Omega is a perfect predictor” are not actually disagreeing about the decision theory; they are disagreeing about whether humans are capable of doing that.
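To make that concrete, here is a minimal sketch in Python, with the usual illustrative payoffs of $1,000,000 and $1,000 and the assumption that Omega’s prediction simply reads the brainstate. It enumerates whichever (brainstate, action) pairs the agent is afforded and picks the payoff-maximising one; the `can_act_independently` flag is my own label for the capability question above, and flipping it is the only thing that changes the answer. (The min-max mentioned above degenerates to a plain maximisation here, since Omega’s response is fixed by the brainstate.)

```python
# Sketch: Newcomb payoffs under different assumptions about which choices
# the agent is actually afforded. Omega's prediction is assumed to be read
# directly off the agent's brainstate.

MILLION = 1_000_000   # opaque box contents if Omega predicts one-boxing
THOUSAND = 1_000      # transparent box contents (always present)

def payoff(brainstate: str, action: str) -> int:
    """Payoff when Omega fills the opaque box iff the brainstate is 'one-box'."""
    opaque = MILLION if brainstate == "one-box" else 0
    return opaque if action == "one-box" else opaque + THOUSAND

def best_option(can_act_independently: bool):
    """Enumerate the afforded (brainstate, action) pairs and maximise payoff."""
    states = ["one-box", "two-box"]
    options = [
        (b, a)
        for b in states
        for a in states
        # if the agent cannot act independently, the action must match the brainstate
        if can_act_independently or a == b
    ]
    return max(options, key=lambda opt: payoff(*opt))

# Agent can only "be" a one-boxer or a two-boxer: one-boxing wins ($1,000,000).
print(best_option(can_act_independently=False))   # ('one-box', 'one-box')

# Agent can set the brainstate and then act independently of it:
# (1-box brainstate) -> (take 2 boxes) wins ($1,001,000).
print(best_option(can_act_independently=True))    # ('one-box', 'two-box')
```

Under the first assumption the enumeration reproduces the one-boxer’s answer; under the second it reproduces (1-box brainstate) → (take 2 boxes). That is the sense in which the disagreement is about the option set rather than the decision rule.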
What use is a decision theory that tells you to do things you can’t do?

Maybe the paradox is bamboozling, maybe it’s deconfusing: revealing that DT depends on physics and metaphysics, not just maths.

Well, the decision theory is only applied after assessing the available options, so it won’t tell you to do things you aren’t capable of doing.
I suppose the bamboozle here is that it seems like a DT question but, as you point out, is actually a question about physics. However, even in this thread, people dismiss any questions about that as “not important” and instead try to focus on the decision theory, which isn’t actually relevant here.
For example:
I generally think that free will is not so relevant in Newcomb’s problem. It seems that whether there is some entity somewhere in the world that can predict what I’m doing shouldn’t make a difference for whether I have free will or not, at least if this entity isn’t revealing its predictions to me before I choose.