Free will is a controversial, confusing term that, I suspect, different people take to mean different things. I think to most readers (including me) it is unclear what exactly the Case 1 versus 2 distinction means. (What physical property of the world differs between the two worlds? Maybe you mean not having free will to mean something very mundane, similar to how I don’t have free will about whether to fly to Venus tomorrow because it’s just not physically possible for me to fly to Venus, so I have to “decide” not to fly to Venus?)
I generally think that free will is not so relevant in Newcomb’s problem. It seems that whether there is some entity somewhere in the world that can predict what I’m doing shouldn’t make a difference for whether I have free will or not, at least if this entity isn’t revealing its predictions to me before I choose. (I think this is also the consensus on this forum and in the philosophy literature on Newcomb’s problem.)
>CDT believers only see the second decision. The key here is realising there are two decisions.
Free will aside, as far as I understand, your position is basically in line with what most causal decision theorists believe: you should two-box, but you should commit to one-boxing if you can do so before your brain is scanned. Is that right? (I can give some references to discussions of CDT and commitment if you’re interested.)
If so, how do you feel about the various arguments that people have made against CDT? For example, what would you do in the following scenario?
>Two boxes, B1 and B2, are on offer. You may purchase one or none of the boxes but not both. Each of the two boxes costs $1. Yesterday, Omega put $3 in each box that she predicted you would not acquire. Omega’s predictions are accurate with probability 0.75.
In this scenario, CDT always recommends buying a box, which seems like a bad idea: the seller of the boxes profits in expectation whenever you buy from them.
>TDT believers only see the first decision, [...] The key here is realising there are two decisions.
I think proponents of TDT and especially Updateless Decision Theory and friends are fully aware of this possible “two-decisions” perspective. (Though typically Newcomb’s problem is described as only having one of the two decision points, namely the second.) They propose that the correct way to make the second decision (after the brain scan) is to take the perspective of the first decision (or similar). Of course, one could debate whether this move is valid and this has been discussed (e.g., here, here, or here).
Also: Note that evidential decision theorists would argue that you should one-box in the second decision (after the brain scan) for reasons unrelated to the first-decision perspective. In fact, I think that most proponents of TDT/UDT/… would agree with this reasoning also, i.e., even if it weren’t for the “first decision” perspective, they’d still favor one-boxing. (To really get the first decision/second decision conflict you need cases like counterfactual mugging.)
Different definitions of free will require freedom from different things. Much of the debate centres on Libertarian free will, which requires freedom from causal determinism. Compatibilist free will requires freedom from deliberate restrictions imposed by other free agents. Contra-causal free will requires freedom from physics, on the assumption that physics is deterministic.
Libertarian free will is very much the definition that is relevant to Newcomb. If you could make an undetermined choice, in defiance of the oracle’s predictive abilities, you could get the extra money—but if you can make undetermined choices, how can the predictor predict you?
I agree that some notions of free will imply that Newcomb’s problem is impossible to set up. But if one of these notions is what is meant, then the premise of Newcomb’s problem is that these notions are false, right?
It also happens that I disagree that these notions are relevant to what free will is.
Anyway, if this had been discussed in the original post, I wouldn’t have complained.
It’s advertised as being a paradox, not a proof.
Free will for the purposes of this article refers to the decisions freely available to the agent. In a world with no free will, the agent has no capability to change the outcome of anything. This is the point of my article, as the answer to Newcomb’s paradox changes depending on which decisions we afford the agent. The distinction between the cases is that we are enumerating over the possible answers to the two following questions:
1) Can the agent decide their brainstate? Yes or No.
2) Can the agent decide the number of boxes they choose independently of their brainstate? Yes or No.
This is why there are only 4 sub-cases of Case 2 to consider. I suppose you are right that case 1 is somewhat redundant, since it is covered by answering “no” to both of these questions.
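The enumeration above can be sketched in code. As an illustration (the $1,000,000/$1,000 payoffs and a predictor that reads the brainstate perfectly are my assumptions for the example, not claims from the thread), a small script can maximise over exactly the decisions the agent is afforded:

```python
# Sketch of the 2x2 case enumeration for the original Newcomb setup.
# Assumed payoffs: $1,000,000 in the opaque box, $1,000 in the clear box.

OPAQUE_PRIZE = 1_000_000
CLEAR_PRIZE = 1_000

def payoff(brainstate: str, action: str) -> int:
    # The predictor fills the opaque box iff the brainstate says "one-box".
    opaque = OPAQUE_PRIZE if brainstate == "one-box" else 0
    return opaque if action == "one-box" else opaque + CLEAR_PRIZE

def best_outcome(can_pick_brainstate: bool, can_act_independently: bool,
                 stuck_brainstate: str = "two-box"):
    """Maximise payoff over exactly the decisions afforded to the agent."""
    brainstates = ["one-box", "two-box"] if can_pick_brainstate else [stuck_brainstate]
    outcomes = []
    for b in brainstates:
        actions = ["one-box", "two-box"] if can_act_independently else [b]
        outcomes.extend((payoff(b, a), b, a) for a in actions)
    return max(outcomes)

for pick_b in (True, False):
    for pick_a in (True, False):
        print(f"brainstate? {pick_b}, independent action? {pick_a}:",
              best_outcome(pick_b, pick_a))
```

Answering “Yes” only to (1) makes one-boxing optimal; answering “Yes” only to (2) makes two-boxing dominate; answering “Yes” to both breaks the predictor premise and yields both prizes.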
In this case, the wording of the problem seems to suggest that we implicitly assume the answer to (2) (or rather the equivalent statement for this scenario) is “No”, and that the agent’s actions depend on their brainstate, modelled by a probability distribution. This assumption is implicit in the question because, if the answer to (2) were “Yes”, the agent could wilfully act against Omega’s prediction and break the 0.75 accuracy stated in the premise.
Once again, if the answer to (1) is also “No” then the question is moot since we have no free will.
If the agent can decide their brainstate but we don’t let them decide the number of boxes purchased independently of that brainstate:
Choice 1: Choose “no-boxes purchased” brainstate.
Omega puts $3 in both boxes.
You have a 0.75 probability of buying no boxes and a 0.25 probability of buying a box. Since buying both boxes is not allowed, the only other outcomes are buying B1 or buying B2; the question doesn’t specify how the 0.25 is distributed between them, so let’s say 0.125 each.
Expected value:
$0 * 0.75 + $2 * 0.125 + $2 * 0.125 = $0.50
Choice 2: Choose “1-box purchased” brainstate.
Omega puts $3 in the box you didn’t plan on purchasing. You have a 0.75 probability of buying the box you planned to buy, which is empty; the question doesn’t specify how the remaining 0.25 is distributed between buying nothing and buying the other (filled) box, so assume 0.125 each:
Expected value:
-$1 * 0.75 + $0 * 0.125 + $2 * 0.125 = -$0.50
Choice 3: Choose the other “1-box purchased” brainstate.
Since purchasing both boxes is not allowed, there is no “2-box purchased” brainstate to consider; the only remaining option is planning to buy the other box, which by symmetry with Choice 2 has an expected value of -$0.50.
In this case they should choose the “no-boxes purchased” brainstate.
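As a sanity check, the expected values can be recomputed with a short script. This is a sketch under the same modelling assumptions as above: Omega fills each box she predicts you won’t acquire, you act on your brainstate with probability 0.75, the remaining 0.25 is split evenly over the other two options (that even split is an assumption), and buying both boxes is excluded since the scenario forbids it.

```python
# EV check for the two-box purchase scenario, restricted to the three
# purchase options the scenario allows: "none", "B1" only, "B2" only.

OPTIONS = ["none", "B1", "B2"]
PRICE, PRIZE, ACCURACY = 1, 3, 0.75

def filled_boxes(brainstate):
    """Omega puts $3 in each box the brainstate says you won't buy."""
    return {b for b in ("B1", "B2") if b != brainstate}

def net(action, filled):
    # Net payoff of an action: prize (if the bought box is filled) minus price.
    if action == "none":
        return 0
    return (PRIZE if action in filled else 0) - PRICE

def expected_value(brainstate):
    filled = filled_boxes(brainstate)
    ev = 0.0
    for action in OPTIONS:
        # Act on the brainstate with probability 0.75; split the rest evenly.
        p = ACCURACY if action == brainstate else (1 - ACCURACY) / 2
        ev += p * net(action, filled)
    return ev

for b in OPTIONS:
    print(b, expected_value(b))
```

The “none” brainstate comes out at +$0.50 and the two single-box brainstates at -$0.50 each, matching the calculation above.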
If we were to break the premise of the question and allow the agent to choose their brainstate and also choose the following action independently of that brainstate, the optimal decision would be (no-boxes brainstate) → (buy one box): Omega would have filled both boxes, and since buying both is not allowed, buying one nets a guaranteed $2.
I think this question is pretty much analogous to the original version of Newcomb’s problem, just with an extra layer of probability that complicates the calculations but doesn’t provide any more insight. It’s still the same trickery: the apparent paradox emerges because it’s not immediately obvious that there are secretly two decisions being made, and the question is ambiguous because it’s not clear which decisions are actually afforded to the agent.