In the case where the universe is deterministic and Omega is a Laplacian superintelligence, it sees the world as a four-dimensional space and has access to all of it simultaneously. It doesn’t take magic; it takes exactly the process you’ve explicitly given Omega!
To Omega, time is just another direction, as reversible as the others thanks to its omniscience. Saying that there could not be a causal arrow from events that occur at later times to events that occur at earlier times in the presence of Omega would be just as silly as saying that there cannot be causal arrows from events that are further to the East to events that are further to the West.
So in the LCW version of Newcomb, the first diagram perfectly describes the situation, and reduces to the second diagram. If I choose to one-box when at the button, Omega could learn that at any time it pleases by looking at the time-cube of reality. Thus, I should choose to one-box.
I disagree. I am not saying that Omega is a godlike intelligence that stands outside time and space. Omega just records the position and momentum of every atom in an initial state, feeds them into a computer, and computes a prediction for your decision. I am quite sure that with the standard meaning of “cause”, here the causal diagram is:
[Initial state of atoms] ==> [Omega’s computer] ==> [Prediction] ==> [Money]
while at the same time there is a parallel chain of causation:
[Initial state of atoms] ==> [Your mental processes] ==> [Your decision] ==> [Money]
and no causal arrow goes from your decision to the prediction.
So I find it a weird use of language to say your decision is causally influencing Omega, just because Omega can infer (not see) what your decision will be. Unless you mean by “your decision” not the token, concrete mental process in your head, but the abstract Platonic algorithm that you use, which is duplicated inside Omega’s simulation. But this kind of thinking seems alien to the spirit of CDT.
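The two chains in this model can be written down as a single graph and checked mechanically. A minimal sketch (the node names are my own labels for the bracketed boxes above, not from the thread):

```python
# The causal model above as an adjacency list. Both chains share
# [Initial state of atoms] as a common cause, and no edge leaves
# [Your decision] toward [Prediction].
graph = {
    "initial_state": ["omegas_computer", "your_mental_processes"],
    "omegas_computer": ["prediction"],
    "prediction": ["money"],
    "your_mental_processes": ["your_decision"],
    "your_decision": ["money"],
}

def descendants(g, src):
    """All nodes reachable from src by following causal arrows."""
    seen, stack = set(), [src]
    while stack:
        for child in g.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

# On this reading, the decision affects the money but not the prediction:
assert "money" in descendants(graph, "your_decision")
assert "prediction" not in descendants(graph, "your_decision")
```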
> I disagree. I am not saying that Omega is a godlike intelligence that stands outside time and space. Omega just records the position and momentum of every atom in an initial state, feeds them into a computer, and computes a prediction for your decision.
When you say a Laplacian superintelligence, I presume I can turn to the words of Laplace:
> An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
I’m not saying that Omega is outside of time and space (it still exists in space and acts at various times), but its omniscience is complete at all times.
> I am quite sure that with the standard meaning of “cause”, here the causal diagram
Think of causes this way: if we change X, what also changes? If the world were such that I two-boxed, Omega would not have filled the second box. We change the world such that I one-box. This change requires a physical difference in the world, and that difference propagates both backwards and forwards in time. Thus, the result of that change is that Omega would have filled the second box. Thus, my action causes Omega’s action, because Omega’s action is dependent on its prediction, and its prediction is dependent on my action.
Do not import the assumption that causality cannot flow backwards in time. In the presence of Omega, that assumption is wrong, and “two-boxing” is the result of that defective assumption, not any trouble with CDT.
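This intervention-based reading of “cause” can be made concrete with a toy model. A sketch with the standard $1,000,000 / $1,000 Newcomb amounts (the function names and numbers are my own illustration): compare what an intervention on the decision changes under each causal graph.

```python
def payoff(decision, prediction):
    """Standard Newcomb payoffs: the opaque box holds $1,000,000
    iff Omega predicted one-boxing; the clear box always holds $1,000."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if decision == "one-box" else opaque + 1_000

def intervene_fixed_past(decision, prior_prediction):
    # Usual temporal assumption: the prediction is causally upstream
    # of the decision, so intervening on the decision leaves it alone.
    return payoff(decision, prior_prediction)

def intervene_timeless(decision):
    # The reading argued for above: changing the decision changes the
    # world-state Omega reads, hence changes the prediction with it.
    return payoff(decision, prediction=decision)

# Under the timeless graph, one-boxing wins outright:
assert intervene_timeless("one-box") == 1_000_000
assert intervene_timeless("two-box") == 1_000

# Under the fixed-past graph, two-boxing dominates for every fixed prediction:
for p in ("one-box", "two-box"):
    assert intervene_fixed_past("two-box", p) > intervene_fixed_past("one-box", p)
```

The whole disagreement lives in which `intervene_*` function models the counterfactual; the payoff table is common ground.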
> and no causal arrow goes from your decision to the prediction.
In your model, the only way to alter my decision, which is deterministically determined by the “initial state of atoms”, is to alter the initial state of atoms. That’s the node you should focus on, and it clearly causes both my decision and Omega’s prediction, and so if I can alter the state of the universe such that I will be a one-boxer, I should. If I don’t have that power, there’s no decision problem.
Well, I think this is becoming a dispute over the definition of “cause”, which is not a worthwhile topic. I agree with the substance of what you say. In my terminology, if an event X is entangled deterministically with events before it and events after it, it causes the events after it, is caused by the events before it, and (in conjunction with the laws of nature) logically implies both the events before and after it. You prefer to say that it causes all those events, prior or future, that we must change if we assume a change in X. Fine, then CDT says to one-box.
I just doubt this was the meaning of “cause” that the creators of CDT had in mind (given that it is standardly accepted that CDT two-boxes).
> I just doubt this was the meaning of “cause” that the creators of CDT had in mind (given that it is standardly accepted that CDT two-boxes).
The math behind CDT does not require or imply the temporal assumption of causality, just counterfactual reasoning. I believe that two-boxing proponents of CDT are confused about Newcomb’s Problem, and fall prey to broken verbal arguments instead of trusting their pictures and their math.
> The math behind CDT does not require or imply the temporal assumption of causality, just counterfactual reasoning. I believe that two-boxing proponents of CDT are confused about Newcomb’s Problem, and fall prey to broken verbal arguments instead of trusting their pictures and their math.
People who talk about a “CDT” that does not two-box are not talking about CDT but instead about some other clever thing that does not happen to be CDT (or are just being wrong). The very link you provide is not ambiguous on this subject.
(I am all in favor of clever alternatives to CDT. In fact, I am so in favor of them that I think they deserve their own name, one that doesn’t carry “CDT” connotations. Because CDT two-boxes and defects against its clone.)
> People who talk about a “CDT” that does not two-box are not talking about CDT but instead about some other clever thing that does not happen to be CDT (or are just being wrong). The very link you provide is not ambiguous on this subject.
A solution to a decision problem has two components: the first is reducing the problem from natural language to math; the second is running the numbers.
CDT’s core is:
U(A) = Σ_j P(A > O_j) D(O_j)
where “A > O_j” is the counterfactual conditional “if I were to do A, outcome O_j would obtain,” and D(O_j) is the desirability of that outcome.
Thus, when faced with a problem expressed in natural language, a CDTer needs to turn the problem into a causal graph (in order to do counterfactual reasoning correctly), and then turn that causal graph into an action which has the highest expected value.
I’m aware that Newcomb’s Problem confuses other people, and so they’ll make the wrong causal graph or forget to actually calculate P(A > O_j) when doing their expected value calculation. I make no defense of their mistakes, but it seems to me giving a special new name to not making mistakes is the wrong way to go about this problem.
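Running the numbers for Newcomb with this formula is short. A sketch, assuming a perfect predictor, the usual $1,000,000 / $1,000 payoffs, and counterfactual probabilities P(A > O_j) read off a graph in which the prediction tracks the decision (the dictionary names are mine):

```python
# U(A) = sum_j P(A > O_j) * D(O_j): counterfactual probability of
# each outcome times its desirability.

# Desirability of each outcome (total money received).
D = {"1M": 1_000_000, "1M+1k": 1_001_000, "0": 0, "1k": 1_000}

# P(A > O_j) under a graph where a perfect predictor's prediction
# tracks the decision: one-boxing makes "find the $1M" certain,
# two-boxing makes "find only the $1k" certain.
P = {
    "one-box": {"1M": 1.0, "0": 0.0},
    "two-box": {"1M+1k": 0.0, "1k": 1.0},
}

def U(action):
    return sum(P[action][o] * D[o] for o in P[action])

assert U("one-box") == 1_000_000
assert U("two-box") == 1_000
# With these counterfactuals plugged in, the CDT formula itself one-boxes.
```

Plug in the other graph's counterfactuals (prediction independent of action) and the same formula two-boxes, which is the entire dispute in miniature.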
That is the math for the notion “Calculate the expected utility of a counterfactual decision”. That happens to be the part of the decision theory that is most trivial to formalize as an equation. That doesn’t mean you can fundamentally replace all the other parts of the theory—change the actual meaning represented by those letters—and still be talking about the same decision theory.
The possible counterfactual outcomes being multiplied and summed within CDT are just not the same thing that you advocate using.
> but it seems to me giving a special new name to not making mistakes is the wrong way to go about this problem.
Using the name of a thing that is extensively studied and taught to entire populations of students to mean something different from what all those experts and their students say it means is just silly. It may be a mistake to do what they do, but they do know what it is they are doing, and they get to name it because they were there first.
Spohn changed his mind in 2003, and his 2012 paper is his best endorsement of one-boxing on Newcomb using CDT. Irritatingly, his explanation doesn’t rely on the mathematics as heavily as it could: his NP1 obviously doesn’t describe the situation, because a necessary condition of NP1 is that, conditioned on the reward, your action and Omega’s prediction are independent, which is false. (Hat tip to lukeprog.)
That CDTers were wrong does not mean they always will be wrong, or even that they are wrong now!