Free will is a controversial, confusing term that, I suspect, different people take to mean different things. I think to most readers (including me) it is unclear what exactly the Case 1 versus 2 distinction means. (What physical property of the world differs between the two worlds? Maybe you mean not having free will to mean something very mundane, similar to how I don’t have free will about whether to fly to class tomorrow?)
Free will, for the purposes of this article, refers to the decisions freely available to the agent. In a world with no free will, the agent has no capability to change the outcome of anything. This is the point of my article: the answer to Newcomb’s paradox changes depending on which decisions we afford the agent. The distinction between the cases is that we are enumerating over the possible answers to the following two questions:
1) Can the agent decide their brainstate? Yes or No.
2) Can the agent decide the number of boxes they choose independently of their brainstate? Yes or No.
This is why there are only 4 sub-cases of Case 2 to consider. I suppose you are right that Case 1 is somewhat redundant, since it is covered by answering “No” to both of these questions.
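For concreteness, here is a tiny sketch (the labels are mine, purely illustrative) that just enumerates the 2 × 2 grid of answers these two questions generate:

```python
from itertools import product

# Enumerate the four sub-cases of Case 2: each of the two questions
# (can the agent decide their brainstate? can they decide their action
# independently of that brainstate?) is answered Yes or No.
for decides_brainstate, decides_action_independently in product(["Yes", "No"], repeat=2):
    print(f"decides brainstate: {decides_brainstate:>3} | "
          f"decides action independently: {decides_action_independently}")

# The (No, No) row is the fully deterministic situation, which is why
# Case 1 is already covered by this enumeration.
```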
Two boxes, B1 and B2, are on offer. You may purchase one or none of the boxes but not both. Each of the two boxes costs $1. Yesterday, Omega put $3 in each box that she predicted you would not acquire. Omega’s predictions are accurate with probability 0.75.
In this case, the wording of the problem seems to suggest that we implicitly take the answer to (2) (or rather the equivalent statement for this scenario) to be “No”, and that the agent’s actions depend on their brainstate, modelled by a probability distribution. The reason this assumption is implicit in the question is that if the answer to (2) were “Yes”, the agent could wilfully act against Omega’s prediction and break the 0.75 accuracy assumption stated in the premise.
Once again, if the answer to (1) is also “No” then the question is moot since we have no free will.
If the agent can decide their brainstate but we don’t let them decide the number of boxes purchased independently of that brainstate:
Choice 1: Choose “no-boxes purchased” brainstate.
Omega puts $3 in both boxes.
You have a 0.75 probability of buying no boxes and a 0.25 probability of buying some boxes. The question doesn’t actually specify how that 0.25 is distributed among buying only B1, only B2, or both, so let’s split it evenly: 0.0833 for each of those three possibilities.
Expected value:
$0 × 0.75 + $2 × 0.0833 + $2 × 0.0833 + $4 × 0.0833 ≈ $0.667
Choice 2: Choose “1-box purchased” brainstate.
Omega puts $3 in the box you didn’t plan on purchasing. You have a 0.75 probability of buying only the box you planned on (the empty one), but again the question doesn’t specify how the rest of the 0.25 is distributed, so assume 0.0833 for each of the other three possibilities:
Expected value:
$0 × 0.0833 - $1 × 0.75 + $2 × 0.0833 + $1 × 0.0833 ≈ -$0.50
Choice 3: Choose “2-box purchased” brainstate.
Omega puts no money in either box. You have a 0.75 probability of buying both boxes, and we again assume the remaining 0.25 is distributed evenly among the other three possibilities:
Expected value:
$0 × 0.0833 - $1 × 0.0833 - $1 × 0.0833 - $2 × 0.75 ≈ -$1.667
In this case, the agent should choose the “no-boxes purchased” brainstate.
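As a sanity check, here is a minimal sketch that reproduces the three expected values above, under the same assumptions as the text (the leftover 0.25 split evenly over the three unpredicted purchase patterns); the box labels and helper names are mine, purely illustrative:

```python
# Expected value of each brainstate, using the payoffs from the problem:
# each box costs $1, and Omega puts $3 in every box she predicted you
# would NOT acquire, with the prediction matching your brainstate.
PRICE = 1
PRIZE = 3

# Possible purchase patterns, written as the set of boxes bought.
ACTIONS = {"none": set(), "B1 only": {"B1"}, "B2 only": {"B2"}, "both": {"B1", "B2"}}

def expected_value(brainstate):
    planned = ACTIONS[brainstate]
    filled = {"B1", "B2"} - planned          # boxes Omega predicts you won't buy
    ev = 0.0
    for action, bought in ACTIONS.items():
        # 0.75 on the predicted action, the leftover 0.25 split evenly (as above).
        p = 0.75 if action == brainstate else 0.25 / 3
        ev += p * (PRIZE * len(bought & filled) - PRICE * len(bought))
    return ev

for state in ["none", "B1 only", "both"]:
    print(f"{state}: {expected_value(state):+.3f}")
# none: +0.667, B1 only: -0.500, both: -1.667 -> choose the no-boxes brainstate
```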
If we were to break the premises of the question and allow the agent both to choose their brainstate and to choose the subsequent action independently of that brainstate, the optimal decision would be (no-boxes brainstate) → (buy both boxes): Omega fills both boxes, so buying both costs $2 and returns $6, for a guaranteed net gain of $4.
I think this question is pretty much analogous to the original version of Newcomb’s problem, just with an extra layer of probability that complicates the calculations but doesn’t provide any more insight. It’s still the same trickery: the apparent paradox emerges because it’s not immediately obvious that there are secretly two decisions being made, and the question is ambiguous because it’s not clear which of those decisions are actually afforded to the agent.