The math behind CDT does not require or imply the temporal assumption of causality, just counterfactual reasoning. I believe that two-boxing proponents of CDT are confused about Newcomb’s Problem, and fall prey to broken verbal arguments instead of trusting their pictures and their math.
People who talk about a “CDT” that does not two-box are not talking about CDT but instead about some other clever thing that does not happen to be CDT (or they are just being wrong). The very link you provide is not ambiguous on this subject.
(I am all in favour of clever alternatives to CDT. In fact, I am so in favour of them that I think they deserve their own name, one that doesn’t carry “CDT” connotations. Because CDT two-boxes and defects against its clone.)
A solution to a decision problem has two components. The first is reducing the problem from natural language to math; the second is running the numbers.
CDT’s core is:
U(A) = \sum_j P(A > O_j) \, D(O_j)
Thus, when faced with a problem expressed in natural language, a CDTer needs to turn the problem into a causal graph (in order to do counterfactual reasoning correctly), and then use that causal graph to pick the action with the highest expected value.
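To make that last step concrete, here is a minimal sketch in Python of the expected-value calculation applied to Newcomb's Problem. The payoffs, the 0.99 predictor accuracy, the prior on the box being filled, and the two candidate assignments of P(A > O_j) (one for the textbook causal graph in which the prediction is causally independent of the action, one in which the counterfactual probabilities track the predictor's accuracy) are all illustrative assumptions rather than anything fixed by the discussion above.

```python
# Minimal sketch: the CDT expected-value step on Newcomb's Problem.
# All numbers (payoffs, predictor accuracy, prior on the box being filled)
# are illustrative assumptions, not part of the discussion above.

def cdt_utility(counterfactual_probs, payoffs):
    """U(A) = sum_j P(A > O_j) * D(O_j) for a single action A."""
    return sum(p * payoffs[outcome] for outcome, p in counterfactual_probs.items())

payoffs = {
    "filled, one-boxed": 1_000_000,
    "empty, one-boxed": 0,
    "filled, two-boxed": 1_001_000,
    "empty, two-boxed": 1_000,
}

# Reading 1: the textbook causal graph, in which the prediction (and hence the
# box's contents) is causally independent of the action, so P(A > O_j) is just
# the prior probability that the box was filled.
p_filled = 0.5
independent = {
    "one-box": {"filled, one-boxed": p_filled, "empty, one-boxed": 1 - p_filled},
    "two-box": {"filled, two-boxed": p_filled, "empty, two-boxed": 1 - p_filled},
}

# Reading 2: counterfactual probabilities that track the predictor's accuracy,
# so that choosing differently "would have" changed the box's contents.
accuracy = 0.99
tracking = {
    "one-box": {"filled, one-boxed": accuracy, "empty, one-boxed": 1 - accuracy},
    "two-box": {"filled, two-boxed": 1 - accuracy, "empty, two-boxed": accuracy},
}

for label, probs in (("independent prediction", independent), ("tracking prediction", tracking)):
    for action in ("one-box", "two-box"):
        print(f"{label:22s} {action}: {cdt_utility(probs[action], payoffs):>12,.0f}")
```

Under the first assignment two-boxing comes out ahead for any value of the prior; under the second, one-boxing does, which is exactly where the disagreement over the right causal graph does its work.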
I’m aware that Newcomb’s Problem confuses other people, and so they’ll make the wrong causal graph or forget to actually calculate P(A > O_j) when doing their expected value calculation. I make no defense of their mistakes, but it seems to me that giving a special new name to not making mistakes is the wrong way to go about this problem.
That is the math for the notion “Calculate the expected utility of a counterfactual decision”. That happens to be the part of the decision theory that is most trivial to formalize as an equation. That doesn’t mean you can fundamentally replace all the other parts of the theory—change the actual meaning represented by those letters—and still be talking about the same decision theory.
The possible counterfactual outcomes being multiplied and summed within CDT are just not the same thing that you advocate using.
Using the name of a thing that is extensively studied and taught to entire populations of students to mean something different from what all those experts and their students say it means is just silly. It may be a mistake to do what they do, but they do know what it is they are doing, and they get to name it because they were there first.
Spohn changed his mind in 2003, and his 2012 paper is his best endorsement of one-boxing on Newcomb using CDT. Irritatingly, his explanation doesn’t rely on the mathematics as heavily as it could: his NP1 obviously doesn’t describe the situation, because a necessary condition of NP1 is that your action and Omega’s prediction are independent conditional on the reward, and that is false here. (Hat tip to lukeprog.)
That CDTers were wrong does not mean they always will be wrong, or even that they are wrong now!