TAG said that Libertarian Free Will (LFW) is the kind of free will relevant to Newcomb’s problem. I think this is true. However, I strongly suspect that most people who write about decision theory, at least on LW, agree that LFW doesn’t exist. So arguably almost the entire problem is about analyzing Newcomb’s problem in a world without LFW. (And of course, a big part of the work is not to decide which action is better, but to formalize procedures that output that action.)
This is why differentiating different forms of Free Will and calling that a “Complete Solution” is dubious. It seems to miss the hard part of the problem entirely. (And the hard part has arguably been solved anyway by Functional/Updateless Decision Theory; not in the sense that there are no open problems, but in the sense that the remaining ones don’t involve Newcomb’s problem.)
Personally, I don’t believe that the problem is actually hard. None of the individual cases are hard; the decisions are pretty simple once we go into each case. Rather, I think the question is a philosophical bamboozle: it is really a question about human capabilities that disguises itself as a problem about decision theory.
As I talk about in the post, the answer to the question changes depending on which decisions we afford the agent. Once that has been determined, it is just a matter of using min-max to find the optimal decision process. So people’s disagreements aren’t actually about decision theory; they are really disagreements about which choices are available to us.
If you allow the agent to decide their brainstate and also to act independently of it, then it is easy to see that the best solution is (1-box brainstate) → (take 2 boxes). People who say “that’s not possible because Omega is a perfect predictor” are not actually disagreeing about the decision theory; they are disagreeing about whether humans are capable of doing that.
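To make that concrete, here is a minimal sketch (my own illustration, not from the post, using the standard $1,000 / $1,000,000 payoffs): once the available options are fixed, with a single agent and a known predictor the min-max step reduces to taking the max over those options.

```python
# Illustrative sketch only: enumerate the options the agent is allowed,
# then take the best payoff. Standard Newcomb amounts: $1,000 in the
# visible box, $1,000,000 in the opaque box iff Omega predicts one-boxing.

def payoff(one_box_brainstate: bool, take_two_boxes: bool) -> int:
    opaque = 1_000_000 if one_box_brainstate else 0
    visible = 1_000 if take_two_boxes else 0
    return opaque + visible

# Case 1: brainstate (what Omega reads) and the later action can come apart,
# so all four combinations are available options.
independent = {(b, a): payoff(b, a) for b in (True, False) for a in (True, False)}
best = max(independent, key=independent.get)
print(best, independent[best])  # (True, True) 1001000 -> 1-box brainstate, take 2 boxes

# Case 2: Omega predicts the actual action, so brainstate and action are tied
# and only the two consistent options are available.
tied = {a: payoff(one_box_brainstate=not a, take_two_boxes=a) for a in (True, False)}
best = max(tied, key=tied.get)
print("two-box" if best else "one-box", tied[best])  # one-box 1000000
```

Nothing in the sketch is doing decision theory in any deep sense; all of the work is in deciding which rows of the table the agent actually has access to.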
What use is a decision theory that tells you to do things you can’t do?
Maybe the paradox is bamboozling, maybe it’s deconfusing: it reveals that DT depends on physics and metaphysics, not just maths.
Well, the decision theory is only applied after assessing the available options, so it won’t tell you to do things you aren’t capable of doing.
I suppose the bamboozle here is that it seems like a DT question but, as you point out, it’s actually a question about physics. However, even in this thread, people dismiss any questions about that as “not important” and instead try to focus on the decision theory, which isn’t actually what’s at issue here.
For example:
I generally think that free will is not so relevant in Newcomb’s problem. It seems that whether there is some entity somewhere in the world that can predict what I’m doing shouldn’t make a difference for whether I have free will or not, at least if this entity isn’t revealing its predictions to me before I choose.
I strongly suspect that most people who write about decision theory, at least on LW, agree that LFW doesn’t exist. So arguably almost the entire problem is about analyzing Newcomb’s problem in a world without LFW.
“Is believed to be”, not “is”.
If you believe in determinism, that’s belief, not knowledge, so it doesn’t solve anything.
This is why differentiating different forms of Free Will and calling that a “Complete Solution” is dubious.
It’s equally dubious to ignore the free will component entirely.
The paradox is a paradox because it implies some kind of choice, but also some kind of determinism via the predictor’s ability to predict.