A large probability distribution over many variables allows one to deduce the direction of the causal arrows and rebuild a causal graph.
Agreed, but I don’t see the relevance. EDT isn’t “get a joint probability distribution, learn the causal graph, and then view your decision as an intervention;” if it were, it would be CDT! The mechanics behind EDT are “get a joint probability distribution, condition on the actions you’re considering, and choose the one with the highest expected value.”
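For concreteness, here is a minimal sketch of that EDT recipe in Python; the joint table and payoffs are made-up numbers of mine, just to show the conditioning step:

```python
# A minimal sketch of the EDT recipe above: take a joint probability table,
# condition on each candidate action, and pick the action with the highest
# conditional expected payoff. The numbers are made up for illustration.

JOINT = {                                   # assumed P(action, state)
    ("a1", "s1"): 0.40, ("a1", "s2"): 0.10,
    ("a2", "s1"): 0.10, ("a2", "s2"): 0.40,
}
PAYOFF = {                                  # assumed payoff for each cell
    ("a1", "s1"): 10, ("a1", "s2"): 0,
    ("a2", "s1"): 12, ("a2", "s2"): 2,
}

def edt_value(action):
    """Expected payoff conditional on the action (conditioning, not intervening)."""
    rows = {cell: p for cell, p in JOINT.items() if cell[0] == action}
    norm = sum(rows.values())
    return sum(p / norm * PAYOFF[cell] for cell, p in rows.items())

values = {a: edt_value(a) for a in ("a1", "a2")}
print(values, "-> choose", max(values, key=values.get))   # a1 wins here
```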
Incidentally, what do you think CDT should do in the Newcomb problem?
I think this is the causal diagram that describes Newcomb’s problem* (and I’ve shaped the nodes like you would see in an influence diagram):
From that causal diagram and the embedded probabilities, we see that the Decision node and the Omega’s Prediction node are identical, and so we can reduce the diagram:
One-boxing leads to $1M, and two-boxing leads to $1k, and so one-boxing is superior to two-boxing.
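To spell out the arithmetic on the reduced diagram (payoffs are the standard Newcomb amounts; treating the merged node as the thing intervened on is my reading of the reduction above):

```python
# The reduced diagram treats Decision and Omega's Prediction as one node,
# so (on this reading) an intervention on the decision fixes the prediction
# as well. Payoffs are the standard $1M / $1k Newcomb amounts.

def payoff(action, prediction):
    box_b = 1_000_000 if prediction == "one-box" else 0   # Omega's box
    box_a = 1_000 if action == "two-box" else 0           # the visible $1k
    return box_a + box_b

def value_with_merged_node(action):
    # do(action) on the merged node: the prediction is forced to match the action
    return payoff(action, prediction=action)

print(value_with_merged_node("one-box"))   # 1,000,000
print(value_with_merged_node("two-box"))   # 1,000
```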
Suppose one imports the assumption that Omega cannot see the future, despite it conflicting with the problem description. Then, one would think it looks like this (with the irrelevant “decision” node suppressed for brevity, because it’s identical to the Omega’s Prediction node):
Again, we can do the same reduction:
One-boxing leads to $1M, and two-boxing leads to $1k, and so one-boxing is superior to two-boxing.
*Note that this is the answer to the real question underlying the question you asked, and I suspect that almost all of the confusion surrounding Newcomb’s Problem results from asking for conclusions rather than causal diagrams.
This has helped me understand much better what you were getting at in the other subthread. I disagree that CDT ought to one-box in Newcomb, at least in the “Least Convenient World” version of Newcomb that I will describe now (which I think captures the most essential features of the problem).
In this LCW version of Newcomb, quantum mechanics is false, the universe consists of atoms moving in perfectly deterministic ways, and Omega is a Laplacian superintelligence who registered the state of every atom a million years ago and ran a simulation forward to arrive at a prediction of your decision. In this case none of your diagrams seems like a good causal formalization: not only does Omega not see the future magically (so your first two don’t work); in addition, the causal antecedent of his prediction is not your decision algorithm per se, but a bunch of atoms a million years ago, which leads separately to his prediction on one side and to your decision algorithm and your decision on the other. The conflation your last two diagrams make between these three things (“your decision”, “your decision algorithm”, and “what causally influences Omega’s prediction”) does not work in this case (unless you stretch your definition of “yourself” backwards to identify yourself, i.e. your decision algorithm, with some features of the distribution of atoms a million years ago!). I don’t think CDT can endorse one-boxing in this case.
In the case where the universe is deterministic and Omega is a Laplacian superintelligence, it sees the world as a four-dimensional space and has access to all of it simultaneously. It doesn’t take magic- it takes the process you’ve explicitly given Omega!
To Omega, time is just another direction, as reversible as the others thanks to its omniscience. Saying that there could not be a causal arrow from events that occur at later times to events that occur at earlier times in the presence of Omega would be just as silly as saying that there cannot be causal arrows from events that are further to the East to events that are further to the West.
So in the LCW version of Newcomb, the first diagram perfectly describes the situation, and reduces to the second diagram. If I choose to one-box when at the button, Omega could learn that at any time it pleases by looking at the time-cube of reality. Thus, I should choose to one-box.
I disagree. I am not saying that Omega is a godlike intelligence that stands outside time and space. Omega just records the position and momentum of every atom in an initial state, feeds them into a computer, and computes a prediction for your decision. I am quite sure that with the standard meaning of “cause”, here the causal diagram is:
[Initial state of atoms] ==> [Omega’s computer] ==> [Prediction] ==> [Money]
while at the same time there is a parallel chain of causation:
[Initial state of atoms] ==> [Your mental processes] ==> [Your decision] ==> [Money]
and no causal arrow goes from your decision to the prediction.
So I find it a weird use of language to say your decision is causally influencing Omega, just because Omega can infer (not see) what your decision will be. Unless you mean by “your decision” not the token, concrete mental process in your head, but the abstract Platonic algorithm that you use, which is duplicated inside Omega’s simulation. But this kind of thinking seems alien to the spirit of CDT.
I disagree. I am not saying that Omega is a godlike intelligence that stands outside time and space. Omega just records the position and momentum of every atom in an initial state, feeds them into a computer, and computes a prediction for your decision.
When you say a Laplacian superintelligence, I presume I can turn to the words of Laplace:
An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
I’m not saying that Omega is outside of time and space- it still exists in space and acts at various times- but its omniscience is complete at all times.
I am quite sure that with the standard meaning of “cause”, here the causal diagram
Think of causes this way: if we change X, what also changes? If the world were such that I two-boxed, Omega would not have filled the second box. We change the world such that I one-box. This change requires a physical difference in the world, and that difference propagates both backwards and forwards in time. Thus, the result of that change is that Omega would have filled the second box. Thus, my action causes Omega’s action, because Omega’s action is dependent on its prediction, and its prediction is dependent on my action.
Do not import the assumption that causality cannot flow backwards in time. In the presence of Omega, that assumption is wrong, and “two-boxing” is the result of that defective assumption, not any trouble with CDT.
and no causal arrow goes from your decision to the prediction.
In your model, the only way to alter my decision, which is deterministically determined by the “initial state of atoms”, is to alter the initial state of atoms. That’s the node you should focus on, and it clearly causes both my decision and Omega’s prediction, and so if I can alter the state of the universe such that I will be a one-boxer, I should. If I don’t have that power, there’s no decision problem.
Well, I think this is becoming a dispute over the definition of “cause”, which is not a worthwhile topic. I agree with the substance of what you say. In my terminology, if an event X is entangled deterministically with events before it and events after it, it causes the events after it, is caused by the events before it, and (in conjunction with the laws of nature) logically implies both the events before and after it. You prefer to say that it causes all those events, prior or future, that we must change if we assume a change in X. Fine, then CDT says to one-box.
I just doubt this was the meaning of “cause” that the creators of CDT had in mind (given that it is standardly accepted that CDT two-boxes).
I just doubt this was the meaning of “cause” that the creators of CDT had in mind (given that it is standardly accepted that CDT two-boxes).
The math behind CDT does not require or imply the temporal assumption of causality, just counterfactual reasoning. I believe that two-boxing proponents of CDT are confused about Newcomb’s Problem, and fall prey to broken verbal arguments instead of trusting their pictures and their math.
The math behind CDT does not require or imply the temporal assumption of causality, just counterfactual reasoning. I believe that two-boxing proponents of CDT are confused about Newcomb’s Problem, and fall prey to broken verbal arguments instead of trusting their pictures and their math.
People who talk about a “CDT” that does not two box are not talking about CDT but instead talking about some other clever thing that does not happen to be CDT (or just being wrong). The very link you provide is not ambiguous on this subject.
(I am all in favour of clever alternatives to CDT. In fact, I am so in favor of them that I think they deserve their own name that doesn’t give them “CDT” connotations. Because CDT two boxes and defects against its clone.)
People who talk about a “CDT” that does not two box are not talking about CDT but instead talking about some other clever thing that does not happen to be CDT (or just being wrong). The very link you provide is not ambiguous on this subject.
A solution to a decision problem has two components. The first component is reducing the problem from natural language to math; the second component is running the numbers.
CDT’s core is:
$$U(A) = \sum_j P(A > O_j)\, D(O_j)$$
Thus, when faced with a problem expressed in natural language, a CDTer needs to turn the problem into a causal graph (in order to do counterfactual reasoning correctly), and then turn that causal graph into an action which has the highest expected value.
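Here is a rough sketch of that procedure in Python, reading $A > O_j$ as the counterfactual “if $A$ were performed, outcome $O_j$ would obtain” and $D$ as the desirability of the outcome; the 0.99 accuracy and 0.5 prior used for the two candidate graphs are illustrative assumptions of mine:

```python
# U(A) = sum_j P(A > O_j) * D(O_j) on Newcomb, under two candidate causal
# graphs. The graph only enters through the counterfactual probabilities;
# the 0.99 accuracy and 0.5 prior are illustrative assumptions.

PAYOFF = {
    ("one-box", "predicted-one"): 1_000_000,
    ("one-box", "predicted-two"): 0,
    ("two-box", "predicted-one"): 1_001_000,
    ("two-box", "predicted-two"): 1_000,
}
PREDICTIONS = ("predicted-one", "predicted-two")

def cdt_utility(action, p_counterfactual):
    """Sum over outcomes of P(A > O) times the desirability of O."""
    return sum(p_counterfactual(action, pred) * PAYOFF[(action, pred)]
               for pred in PREDICTIONS)

def arrow_to_prediction(action, pred):
    # Graph with a causal arrow from the decision to the prediction:
    # intervening on the action moves the prediction with 0.99 accuracy.
    match = (action == "one-box") == (pred == "predicted-one")
    return 0.99 if match else 0.01

def no_arrow(action, pred):
    # Graph with no such arrow: the intervention leaves the prediction
    # at its prior, taken here to be 0.5 either way.
    return 0.5

for a in ("one-box", "two-box"):
    print(a, cdt_utility(a, arrow_to_prediction), cdt_utility(a, no_arrow))
# The first graph favors one-boxing (990,000 vs 11,000); the second favors
# two-boxing (500,000 vs 501,000).
```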
I’m aware that Newcomb’s Problem confuses other people, and so they’ll make the wrong causal graph or forget to actually calculate $P(A > O_j)$ when doing their expected value calculation. I make no defense of their mistakes, but it seems to me giving a special new name to not making mistakes is the wrong way to go about this problem.
That is the math for the notion “Calculate the expected utility of a counterfactual decision”. That happens to be the part of the decision theory that is most trivial to formalize as an equation. That doesn’t mean you can fundamentally replace all the other parts of the theory—change the actual meaning represented by those letters—and still be talking about the same decision theory.
The possible counterfactual outcomes being multiplied and summed within CDT are just not the same thing that you advocate using.
but it seems to me giving a special new name to not making mistakes is the wrong way to go about this problem.
Using the name for a thing that is extensively studied and taught to entire populations of students to mean doing a different thing than what all those experts and their students say it means is just silly. It may be a mistake to do what they do but they do know what it is they are doing and they get to name it because they were there first.
Spohn changed his mind in 2003, and his 2012 paper is his best endorsement of one-boxing on Newcomb using CDT. Irritatingly, his explanation doesn’t rely on the mathematics as heavily as it could- his NP1 obviously doesn’t describe the situation because a necessary condition of NP1 is that, conditioned on the reward, your action and Omega’s prediction are independent, which is false. (Hat tip to lukeprog.)
That CDTers were wrong does not mean they always will be wrong, or even that they are wrong now!
You do realise you are describing a version of CDT that almost no CDT proponent uses? It’s pretty much Eliezer’s TDT. Now you could describe TDT as “CDT done properly” (in fact some people have described it as “EDT done properly”), but that’s needlessly confusing; I’ll keep using CDT to designate the old system, and TDT for the new.
You do realise you are describing a version of CDT that almost no CDT proponent uses?
Yes and no. Like I describe here, I get that most people go funny in the head when you present them with a problem where causality flows backwards in time. But the math that makes up CDT does not require its users to go funny in the head, and if they keep their wits about them, it lets them solve the problem quickly and correctly. I don’t think its proponents’ mistakes should discredit the math or require us to give the math a new name.
It’s not clear to me that we agree about the central point of the post- I think Egan’s examples are generally worthless or wrong. In the Murder Lesion, shooting is the correct decision if she doesn’t have the lesion, and the incorrect decision if she does. Whether or not she should shoot depends on how likely it is that she has the lesion. He assumes that her desire to kill Alfred is enough to make the probability she has the lesion high enough to recommend not shooting- and if you stick that information into the problem, then CDT says “don’t shoot.” Note that choosing to shoot or not won’t add or remove the lesion- and so if Mary suspects she has the lesion, she probably does so on the basis that she wouldn’t be contemplating murdering Alfred without the lesion.*
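To make “depends on how likely it is that she has the lesion” concrete, here is a toy version of the calculation; every payoff and probability below is an assumption of mine, not Egan’s:

```python
# Toy Murder Lesion numbers: shooting helps if the lesion is absent and
# backfires if it is present, so CDT's recommendation turns on P(lesion).
# Every payoff and probability here is an assumption, not Egan's.

HIT_WITHOUT_LESION = 0.9    # assumed chance the shot succeeds, no lesion
BACKFIRE_WITH_LESION = 0.9  # assumed chance it goes badly wrong with the lesion
U_KILL = 10                 # assumed utility of killing Alfred
U_BACKFIRE = -100           # assumed utility of the shot backfiring on Mary
U_NOTHING = 0               # not shooting (or missing) changes nothing

def cdt_value_of_shooting(p_lesion):
    # Shooting neither adds nor removes the lesion, so the intervention
    # leaves p_lesion where it was.
    if_lesion = (BACKFIRE_WITH_LESION * U_BACKFIRE
                 + (1 - BACKFIRE_WITH_LESION) * U_NOTHING)
    if_no_lesion = (HIT_WITHOUT_LESION * U_KILL
                    + (1 - HIT_WITHOUT_LESION) * U_NOTHING)
    return p_lesion * if_lesion + (1 - p_lesion) * if_no_lesion

for p in (0.01, 0.05, 0.10, 0.50):
    print(p, cdt_value_of_shooting(p))   # > 0 means shooting beats not shooting
```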
In the Psychopath button, Paul can encode the statement “only a psychopath would push the button” as the statement “if I push the button, I will be a psychopath,” and then CDT advises against pushing the button. (If psychopathy causes button-pushing, but the reverse is not true, then Paul should not be confident that only psychopaths would push the button!) This is similar to his ‘ratifiability’ idea, except instead of bolting a clunky condition onto the sleek apparatus of CDT, it just requires making a causal graph that accurately reflects the problem- and thus odd problems will have odd graphs.
In Egan’s Smoking Lesion, he doesn’t fully elaborate the problem, and makes a mistake: in his Smoking Lesion, smoking does cause cancer, and so CDT cautions against smoking (unless you’re confident enough that you don’t have the lesion that the benefits of smoking outweigh the costs, which won’t be the case for those who think they have the lesion). It amazes me that he blithely states CDT’s endorsement without running through the math to show that it’s the endorsement!
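Here is what I mean by running through the math, on a toy reading in which smoking only causes cancer for lesion carriers; every number is an assumption of mine, not Egan’s:

```python
# A toy version of the calculation, on the reading that smoking causes
# cancer only in carriers of the lesion. All numbers are my assumptions.

U_SMOKING = 1                        # assumed enjoyment of smoking
U_CANCER = -100                      # assumed disutility of cancer
P_CANCER_SMOKE_LESION = 0.5          # assumed effect of smoking with the lesion
P_CANCER_SMOKE_NO_LESION = 0.0       # assumed: harmless without the lesion

def cdt_value_of_smoking(p_lesion):
    # Intervening on "smoke" doesn't change p_lesion; it only activates the
    # smoking -> cancer path where the lesion is present.
    p_cancer = (p_lesion * P_CANCER_SMOKE_LESION
                + (1 - p_lesion) * P_CANCER_SMOKE_NO_LESION)
    return U_SMOKING + p_cancer * U_CANCER

for p in (0.01, 0.02, 0.10, 0.90):
    print(p, cdt_value_of_smoking(p))   # > 0 means smoking beats abstaining
```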
* Edited to add: I agree that if the original Smoking Lesion problem has a “desire to smoke” variable that is a perfect indicator of the presence of the lesion, then EDT can get the problem right. The trouble would be that if the “desire to smoke” variable is only partially caused by the lesion (to the point that it’s not informative enough), EDT can get lost whereas CDT will still recognize the lack of a causal arrow. I suspect, but this is a wild conjecture because I haven’t run through the math yet, that EDT will set a stricter bound on “belief that I have the murder lesion” than CDT will in the version of the Murder Lesion where there’s a “desire to kill” node which is partially caused by the lesion.
It seems we are now quibbling about vocabulary—hence we no longer disagree :-)
Great!
Ps: your graphs are very similar to mine: http://lesswrong.com/lw/f37/naive_tdt_bayes_nets_and_counterfactual_mugging/