> Two-boxing definitely entails that you are a two-boxing agent type. That’s not the same claim as the claim that the decision and the agent type are the same thing. See also my comment here. I would be interested to know your answer to my questions there (particularly the second one).
When I said ‘A and B are the same,’ I meant that it is not possible for one of A and B to have a different truth-value from the other. Two-boxing entails that you are a two-boxer, but being a two-boxer also entails that you’ll two-box. Still, let me try to convince you on the basis of your second question, treating the two as at least conceptually distinct.
Imagine a hypothetical time when people spoke about statistics in terms of causation rather than correlation (and suppose no one had done Pearl’s work). As you can imagine, the paradoxes would write themselves. At some point, someone would throw up their hands and tell everyone to stop talking about causation. And then the causalists would rebel, because causality is a sacred idea. The correlators would probably reply by constructing a situation in which a third, unmeasured variable C causes both A and B.
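To see how sharp that reply is, here is a minimal simulation (my own illustration; the coefficients and noise scales are arbitrary) of exactly that structure: an unmeasured C drives both A and B, and the two end up strongly correlated even though neither causes the other.

```python
# Minimal sketch of confounding: an unmeasured C causes both A and B,
# so A and B correlate even though no causal arrow connects them.
# All coefficients and noise scales are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

C = rng.normal(size=n)             # the hidden common cause
A = 2.0 * C + rng.normal(size=n)   # A listens only to C, never to B
B = -1.5 * C + rng.normal(size=n)  # B listens only to C, never to A

print(np.corrcoef(A, B)[0, 1])     # strongly negative, with no A-B link
```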
Newcomb’s is that problem for decision theory. CDT is in a sense right when it says that one-boxing doesn’t cause there to be a million dollars in the box, and that what does cause the money to be there is being a one-boxer. But it ignores the fact that the same thing that caused there to be a million dollars also causes you to one-box; so, while there may not be a causal link, there very definitely is a correlation.
‘C causing both A and B’ is the simplest and most intuitive way in which correlation can fail to be causation, and it is precisely here that CDT fails. EDT, by contrast, looks at the correlations between decisions and consequences and uses those to decide.
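To put numbers on the disagreement, here is a hedged sketch (my own parameterisation of the standard $1,000,000 / $1,000 payoffs; nothing here is canonical) of the expected values each theory computes:

```python
# Expected values in Newcomb's problem under each decision theory.
# p is Omega's assumed predictive accuracy; q is CDT's fixed prior
# probability that the opaque box is full. Both are illustrative.

def edt_values(p=0.99):
    # EDT conditions on the decision: choosing an action is evidence
    # about what Omega predicted, hence about the box's contents.
    one_box = p * 1_000_000 + (1 - p) * 0
    two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)
    return one_box, two_box

def cdt_values(q=0.5):
    # CDT holds the box's contents fixed, since the decision cannot
    # cause the money to be there (or not be there).
    one_box = q * 1_000_000
    two_box = q * 1_000_000 + 1_000   # dominates for every q
    return one_box, two_box

print(edt_values())   # (990000.0, 11000.0)  -> EDT one-boxes
print(cdt_values())   # (500000.0, 501000.0) -> CDT two-boxes
```

The point of the sketch is that CDT’s verdict is insensitive to p: however accurate Omega is, the extra $1,000 dominates once the box’s contents are held fixed.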
Aside: You’re right, though, that the LW idea of a decision is somewhat different from the CDT idea. You define a decision as “a proposition that the agent can make true or false at will.” That definition contains an enormous black box called will, and if Omega has arbitrarily high predictive accuracy, then there must be a causal link running from Omega’s raw material for prediction (your brain state) to your decision, straight through that black box. CDT, when it says that you ought to look only at causal arrows that begin at the decision, assumes that no causal arrow can point to the decision (because the moment you admit that a causal arrow can begin somewhere and end at your decision, you have to admit that there can exist a C that causes both your decision and a consequence without your decision actually causing the consequence).
In short, the new idea of what a decision is itself creates the need for a new decision theory.
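For concreteness, here is a speculative sketch (entirely my own construction) of the causal graph this aside describes: the brain state causes both the decision and, via Omega’s prediction, the box’s contents. Conditioning on the decision is then informative about the box, but surgically intervening on the decision (Pearl’s do-operator, which severs the arrow from brain state to decision) is not:

```python
# Brain state is the common cause: it determines the decision, and Omega
# reads it to fill (or not fill) the box. ACCURACY is an assumed value.
import random

random.seed(0)
ACCURACY = 0.99

def trial(do_decision=None):
    brain_state = random.choice(["one-boxer", "two-boxer"])
    # Omega predicts from the brain state, before any decision is made.
    correct = random.random() < ACCURACY
    predicts_one_box = (brain_state == "one-boxer") == correct
    box_full = predicts_one_box
    # The decision is normally caused by the same brain state...
    decision = "one-box" if brain_state == "one-boxer" else "two-box"
    # ...unless we intervene surgically, cutting that incoming arrow.
    if do_decision is not None:
        decision = do_decision
    return decision, box_full

N = 100_000
observed = [trial() for _ in range(N)]
forced = [trial(do_decision="one-box") for _ in range(N)]

one_boxers = [full for d, full in observed if d == "one-box"]
print(sum(one_boxers) / len(one_boxers))    # ~0.99: conditioning is informative
print(sum(full for _, full in forced) / N)  # ~0.50: intervening is not
```

This is exactly the gap between P(money | one-box) and P(money | do(one-box)), and it only opens up once you allow arrows that point into the decision.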