Uh, doesn’t the naive CDT setup for Newcomb’s problem normally include a “my innards” node that has arrows going to both B and P?
If you decide what your innards are, and not what your action is, then this matches the problem description. If you can somehow have dishonest innards (Omega thinks I’m a one-boxer, then I can two-box), then this again violates the perfect prediction assumption.
I believe, as an empirical matter, that the first explicitly CDT accounts of Newcomb’s problem did not use graphs, but if you convert their argument into a graph, it implicitly assumes “B → M ← P.”
If you can somehow have dishonest innards (Omega thinks I’m a one-boxer, then I can two-box), then this again violates the perfect prediction assumption.
Isn’t the whole point of CDT that you cut any arrows from ancestor nodes with do(A) where A is your “intervention”? Obviously you can’t have your innards imply your action if you explicitly violate that connection by describing your decision as an intervention.
Here is how I understood typical CDT accounts of Newcomb’s problem: You have a graph given by B <- Innards -> P and B -> M <- P. Innards starts with some arbitrary prior probability since you don’t know your decision beforehand. You perturb the graph by deleting Innards -> B in order to calculate p(M | do(B)), and in doing so you end up with a graph “looking like” B -> M <- P. Then the usual “dominance” arguments determine the decision regardless of the prior probability on Innards.
Of course, after doing this analysis and coming up with a decision you now know (unconditionally) the value of B and therefore Innards, so arguably the probabilities for those should be set to 1 or 0 as appropriate in the original graph. This is generally interpreted by CDTists as a proof that this agent always two-boxes, and always gets the smaller reward.
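For concreteness, here is a minimal sketch of that mutilated-graph calculation (the payoffs are the usual Newcomb amounts; the prior on the prediction stands in for the prior on Innards):

```python
def expected_utility(action, p_predicted_one_box):
    # Opaque box holds $1,000,000 iff Omega predicted one-boxing; the
    # transparent box always holds $1,000. Under do(action) the severed
    # graph leaves the prediction at its prior probability.
    eu = 0.0
    for predicted_one_box, prob in [(True, p_predicted_one_box),
                                    (False, 1 - p_predicted_one_box)]:
        opaque = 1_000_000 if predicted_one_box else 0
        payoff = opaque if action == "one-box" else opaque + 1_000
        eu += prob * payoff
    return eu

# Dominance: whatever the prior on Innards, two-boxing wins by $1,000.
for p in (0.0, 0.25, 0.5, 1.0):
    gain = expected_utility("two-box", p) - expected_utility("one-box", p)
    assert abs(gain - 1_000) < 1e-6
```

The prior drops out of the comparison entirely, which is exactly the “dominance” argument.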
Isn’t the whole point of CDT that you cut any arrows from ancestor nodes with do(A) where A is your “intervention”?
Yes. My point is that when you have a supernatural Omega, then putting any of Omega’s actions in ancestor nodes of your decisions, instead of descendant nodes of your decisions, is a mistake that violates the problem description.
But if you don’t delete the incoming arcs on your decision nodes then it isn’t CDT anymore, it’s just EDT.
Which raises the question of why we should bother with CDT in the first place. Some people claim that EDT fails at “smoking lesion”-type problems, but I think that is due to incorrect modelling or underspecification of the problem. If you use the correct model, EDT produces the “right” answer. It seems to me that EDT is superior to CDT.
(Ilya Shpitser will disagree, but I never understood his arguments)
The trick is to construct a system that deals with things 20 times more complicated than smoking lesion. That system is recent, and you will have to read e.g. my thesis, or Jin Tian’s thesis, or elsewhere to see what it is.
I have yet to see anyone advocating EDT actually handle a complicated example correctly. Or even a simple tricky example, e.g. the front door case.
But if you don’t delete the incoming arcs on your decision nodes then it isn’t CDT anymore, it’s just EDT.
You still delete incoming arcs when you make a decision. The argument is that if Omega perfectly predicts your decision, then causally his prediction must be a descendant of your decision, rather than an ancestor, because if it were an ancestor you would sever the connection that is still solid (and thus violate the problem description).
(Ilya Shpitser will disagree, but I never understood his arguments)
This is a shame, because he’s right. Here’s my brief attempt at an explanation of the difference between the two:
EDT uses the joint probability distribution. If you want to express a joint probability distribution as a graphical Bayesian network, then the direction of the arrows doesn’t matter (modulo some consistency concerns). If you utilize your human intelligence, you might be able to figure out “okay, for this particular action, we condition on X but not on Y,” but you do this for intuitive reasons that may be hard to formalize and which you might get wrong. When you use the joint probability distribution, you inherently assume that all correlation is causation, unless you’ve specifically added a node or data to block causation for any particular correlation.
CDT uses the causal network, where the direction of the arrows is informative. You can tell the difference between altering and observing something, in that observations condition things both up and down the causal graph, whereas alterations only condition things down the causal graph. You only need to use your human intelligence to build the right graph, and then the math can take over from there. For example, consider price controls: there’s a difference between observing that the price of an ounce of gold is $100 and altering the price of an ounce of gold to be $100. And causal networks allow you to answer questions like “given that the price of gold is observed to be $100, what will happen when we force the price of gold to be $120?”
Now, if you look at the math, you can see a way to embed a causal network in a network without causation. So we could use more complicated networks and let conditioning on nodes do the graph severing for us. I think this is a terrible idea, both philosophically and computationally, because it entails more work and less clarity, both of which are changes in the wrong direction.
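For instance, the gold-price point can be sketched with a toy two-node network (all numbers here are invented for illustration):

```python
# Chain: Demand -> Price. Observing a high price is evidence of high
# demand; forcing the price high is not.
p_demand_high = 0.3
p_price_high = {"high": 0.9, "low": 0.2}  # P(price high | demand level)

# Observation: condition both up and down the graph (Bayes' rule).
p_obs = (p_demand_high * p_price_high["high"]
         + (1 - p_demand_high) * p_price_high["low"])
p_demand_if_observed = p_demand_high * p_price_high["high"] / p_obs

# Intervention: do(price = high) severs Demand -> Price, so the belief
# about demand stays at its prior.
p_demand_if_forced = p_demand_high

assert p_demand_if_observed > p_demand_if_forced  # roughly 0.66 vs 0.3
```

The same graph and the same numbers give two different answers, depending on whether the price was seen or set.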
You still delete incoming arcs when you make a decision. The argument is that if Omega perfectly predicts your decision, then causally his prediction must be a descendant of your decision, rather than an ancestor, because if it were an ancestor you would sever the connection that is still solid (and thus violate the problem description).
If I understand correctly, in causal networks the orientation of the arcs must respect “physical causality”, which I roughly understand to mean consistency with the thermodynamic arrow of time. There is no way for your action to cause Omega’s prediction in this sense, unless time travel is involved.
EDT uses the joint probability distribution. If you want to express a joint probability distribution as a graphical Bayesian network, then the direction of the arrows doesn’t matter (modulo some consistency concerns).
Yes, different Bayesian networks can represent the same probability distribution. And why would that be a problem? The probability distribution and your utility function are all that matters.
When you use the joint probability distribution, you inherently assume that all correlation is causation, unless you’ve specifically added a node or data to block causation for any particular correlation.
“Correlation vs causation” is an epistemic error. If you are making it then you are using the wrong probability distribution, not a “wrong” factorization of the correct probability distribution.
If I understand correctly, in causal networks the orientation of the arcs must respect “physical causality”, which I roughly understand to mean consistency with the thermodynamic arrow of time.
In the real world, this is correct, but it is not mathematically necessary. (To go up a meta level, this is about how you build causal networks in the first place, not about how you reason once you have a causal network; even if philosophers were right about CDT as the method to go from causal networks to decisions, they seem to have been confused about the method by which one goes from English problem statements to causal networks when it comes to Newcomb’s problem.)
unless time travel is involved.
It is. How else can Omega be a perfect predictor? (I may be stretching the language, but I count Laplace’s Demon as a time traveler, since it can ‘see’ the world at any time, even though it can only affect the world at the time that it’s at.)
Yes, different Bayesian networks can represent the same probability distribution. And why would that be a problem?
The problem is that you can’t put any meaning into the direction of the arrows because they’re arbitrary.
“Correlation vs causation” is an epistemic error. If you are making it then you are using the wrong probability distribution, not a “wrong” factorization of the correct probability distribution.
If you give me a causal diagram and the embedded probabilities for the environment, and ask me to predict what would happen if you did action A (i.e. counterfactual reasoning), you’ve already given me all I need to calculate the probabilities of any of the other nodes you might be interested in, for any action included in the environment description.
If you give me a joint probability distribution for the environment, and ask me to predict what would happen if you did action A, I don’t have enough information to calculate the probabilities of the other nodes. You need to give me a different joint probability distribution for every possible action you could take. This requires a painful amount of communication, but possibly worse is that there’s no obvious type difference between the joint probability distribution for the environment and for the environment given a particular action—and if I calculate the consequences of an action given the whole environment’s data, I can get it wrong.
In the real world, this is correct, but it is not mathematically necessary.
If you take physical causality out of the picture, then the orientation of the arcs is underspecified in the general case. But then, since you are only allowed to cut arcs that are incoming to the decision nodes, your decision model will be underspecified as well.
It is. How else can Omega be a perfect predictor?
If you are going to allow time travel, defined in a broad sense, then your causal network will have cycles.
The problem is that you can’t put any meaning into the direction of the arrows because they’re arbitrary.
But the point is that in EDT you don’t care about the direction of the arrows.
If you give me a causal diagram and the embedded probabilities for the environment, and ask me to predict what would happen if you did action A (i.e. counterfactual reasoning), you’ve already given me all I need to calculate the probabilities of any of the other nodes you might be interested in, for any action included in the environment description.
If I give you a causal diagram for Newcomb’s problem (or some variation thereof) you will make a wrong prediction, because causal diagrams can’t properly represent it.
If you give me a joint probability distribution for the environment, and ask me to predict what would happen if you did action A, I don’t have enough information to calculate the probabilities of the other nodes.
If the model includes me as well as the environment, you will be able to make the correct prediction.
Of course, if you give this prediction back to me and it influences my decision, then the model has to include you as well, which may, in principle, cause Gödelian self-reference issues. That is a fundamental limit on the logical capabilities of any computable system, and there is no easy way around it. But it’s not as bad as it sounds: the fact that you can’t precisely predict everything about yourself doesn’t mean that you can’t predict anything, or that you can’t make approximate predictions. (For instance, GCC can compile and optimize GCC.)
Causal decision models are one way to approximate hard decision problems, and they work well in many practical cases. Newcomb-like scenarios are specifically designed to make them fail.
But the point is that in EDT you don’t care about the direction of the arrows.
Yes, and the fact that EDT does not assign meaning to the direction of the arrows is exactly why it’s a less powerful language for describing environments.
If I give you a causal diagram for Newcomb’s problem (or some variation thereof) you will make a wrong prediction, because causal diagrams can’t properly represent it.
If you allow retrocausation, I don’t see why you think this is the case.
I’m not sure what we are disagreeing about. In CDT you need causal Bayesian networks where the arrow orientation reflects physical causality. In EDT you just need probability distributions. You can represent them as Bayesian networks, but in this case arrow direction doesn’t matter, up to certain consistency constraints.
Why would EDT not having causal arrows be a problem?
Disagree. The directionality of causation appears to be a consequence of the Second Law of Thermodynamics, which is not a fundamental law.
All the microscopic laws are completely compatible with there being a region of space-time more or less like ours, but in reverse, with entropy decreasing monotonically. In fact, in a sufficiently large world, such a region is to be expected, since the Second Law is probabilistic. In this region, matches will light before (from our perspective) they are struck, and ripples in a pond will coalesce to a single point and eject a rock from the pond. If we use nodes similar to the ones we do in our environment, then in order to preserve the Causal Markov Condition, we would have to draw arrows in the opposite temporal direction.
Causation is not a useful concept when we’re talking about the fundamental level of nature, precisely because all fundamental interactions (with some very obscure exceptions) are completely time-symmetric. Causation (and the whole DAG framework) becomes useful when we move to the macroscopic world of temporally asymmetric phenomena. And the temporal asymmetry is just a manifestation of the Second Law.
Causation is not a useful concept when we’re talking about the fundamental level of nature, precisely because all fundamental interactions (with some very obscure exceptions) are completely time-symmetric.
Assuming CPT symmetry, the very reason why there’s still matter in the universe (as opposed to it all having annihilated with antimatter) in the first place must be one of those very obscure exceptions.
It’s true that CP-violations appear to be a necessary condition for the baryon asymmetry (if you make certain natural-seeming assumptions). It’s another question whether the observed CP-violations are sufficient for the asymmetry, if the other Sakharov conditions are met. And one of the open problems in contemporary cosmology is precisely that they don’t appear to be sufficient, that the subtle CP-violations we have observed so far (only in four types of mesons) are too subtle to account for the huge asymmetry between matter and anti-matter. They would only account for a tiny amount of that asymmetry.

So, yeah, the actual violations of T-symmetry we see are in fact obscure exceptions. They are not sufficient to account for either the pervasive time asymmetry of macroscopic phenomena or the pervasive baryon asymmetry at the microscopic level.

There are two ways to go from here: either there must be much more significant CP-violations that we haven’t yet been able to observe, or the whole Sakharov approach of accounting for the baryon asymmetry dynamically is wrong, and we have to turn to another kind of explanation (anthropic, maybe?). The latter option is what we have settled on when it comes to time asymmetry—we have realized that a fundamental single-universe dynamical explanation for the Second Law is not on the cards—and it may well turn out to be the right option for the baryon asymmetry as well.
It’s also worth noting that CP-violations by themselves would be insufficient to account for the asymmetry, even if they were less obscure than they appear to be. You also need the Second Law of Thermodynamics (this is the third Sakharov condition). In thermodynamic equilibrium any imbalance between matter and anti-matter generated by CP-violating interactions would be undone.
In any case, even if it turns out that CP-violating interactions are plentiful enough to account for the baryon asymmetry, they still could not possibly account for macroscopic temporal asymmetry. The particular sort of temporal asymmetry we see in the macroscopic world involves the disappearance of macroscopically available information. Microscopic CP-violations are information-preserving (they are CPT symmetric), so they cannot account for this type of asymmetry. If there is going to be a fundamental explanation for the arrow of time it would have to involve laws that don’t preserve information. The only serious candidate for this so far is (real, not instrumental) wavefunction collapse, and we all know how that theory is regarded around these parts.
I should make clear that by ‘fundamental’ I was not speaking in terms of physics, but in terms of decision theory, where causation does seem to be of central importance.
If we use nodes similar to the ones we do in our environment, then in order to preserve the Causal Markov Condition, we would have to draw arrows in the opposite temporal direction.
This reads to me like “conditioning on us being in a weird part of the universe where less likely events are more likely, then when we apply the assumption that we’re in a normal part of the universe where more likely events are more likely we get weird results.” And, yes, I agree with that reading, and I’m not sure what you want that to imply.
I wanted to imply that the temporal directionality of causation is a consequence of the Second Law of Thermodynamics. I guess the point would be that the “less likely” and “more likely” in your gloss are only correct if you restrict yourself to a macroscopic level of description. Described microscopically, both regions are equally likely, according to standard statistical mechanics. This is related to the idea that non-fundamental macroscopic factors make a difference when it comes to the direction of causal influence.
But yeah, this was based on misreading your use of “fundamental” as referring to physical fundamentality. If you meant decision-theoretically fundamental, then I agree with you. I thought you were espousing the Yudkowsky-esque line that causal relations are part of the fundamental furniture of the universe and that the Causal Markov Condition is deeper and more fundamental than the Second Law of Thermodynamics.
“Correlation vs causation” is an epistemic error. If you are making it then you are using the wrong probability distribution, not a “wrong” factorization of the correct probability distribution.
The point here is that if you have the correct probability distribution, all its predictions will be correct (i.e. have minimum expected regret). It seems that the difference between epistemology and decision theory can’t be emphasized enough. If it’s possible for your “mixing up correlation and causation” to result in you making an incorrect prediction and being surprised (when a different prediction would have been systematically more accurate), then there must be an error in your probability distribution.
If you give me a joint probability distribution for the environment, and ask me to predict what would happen if you did action A, I don’t have enough information to calculate the probabilities of the other nodes.
But an arbitrary joint probability distribution can assign P(stuff | action=A) to any values whatsoever. What stops you from just setting all conditional probabilities to the correct values (i.e. those values such that they “predict what would happen if you did action A” correctly, which would be the output of P(stuff|do(A)) on the “correct” causal graph)?
And furthermore, if that joint distribution does make optimal predictions (assuming that this “counterfactual reasoning” results in optimal predictions, because I can’t see any other reason you’d use a set of probabilities), then clearly it must be the probability distribution that is mandated by Cox’s theorem, etc etc.
Note, there is a free variable in the above, which is the unconditional probabilities P(A). But as long as the optimal P(A) values are all nonzero (which is the case if you don’t know the agent’s algorithm, for example), the optimality of the joint distribution requires P(stuff|A) to be correct.
So it would seem that if you have the correct probability distribution, you can predict what would happen if I did action A, by virtue of me giving you the answers. Unless I’ve made a fatal mistake in the above argument.
If it’s possible for your “mixing up correlation and causation” to result in you making an incorrect prediction and being surprised (when a different prediction would have been systematically more accurate), then there must be an error in your probability distribution.
In the smoking lesion variant where smoking is actually protective against cancer, but not enough to overcome the damage done by the lesion (leading to a Simpson’s Paradox), standard EDT recommends against smoking (because it increases your chance of having a lesion) and standard CDT recommends for smoking (because you sever the link to having a lesion, and so only the positive direct effect remains). They give different estimates of the difference between the probability of getting cancer given that you chose to start smoking and the probability of getting cancer given that you chose not to, because EDT doesn’t natively understand the difference between “are a smoker” and “chose to start smoking.” If you understand the difference, you can fudge things so that EDT works while you’re actively putting effort into it.
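Here is a toy version of that variant with assumed numbers (the 0.1 protective effect within each stratum is invented for illustration), showing where the two recommendations diverge:

```python
# The lesion raises both the chance of smoking and the chance of cancer,
# while smoking lowers the chance of cancer within each lesion stratum.
p_lesion = 0.5
p_smoke_given_lesion = {True: 0.8, False: 0.2}
p_cancer_given = {(True, True): 0.7, (True, False): 0.8,
                  (False, True): 0.1, (False, False): 0.2}  # (lesion, smoke)

def p_cancer_edt(smoke):
    # EDT conditions on the action, so smoking is evidence for the lesion.
    num = den = 0.0
    for lesion in (True, False):
        pl = p_lesion if lesion else 1 - p_lesion
        ps = p_smoke_given_lesion[lesion] if smoke else 1 - p_smoke_given_lesion[lesion]
        den += pl * ps
        num += pl * ps * p_cancer_given[(lesion, smoke)]
    return num / den

def p_cancer_cdt(smoke):
    # CDT computes P(cancer | do(smoke)): severing Lesion -> Smoke keeps
    # the lesion at its prior probability.
    return sum((p_lesion if lesion else 1 - p_lesion) * p_cancer_given[(lesion, smoke)]
               for lesion in (True, False))

assert p_cancer_edt(True) > p_cancer_edt(False)  # EDT: smoking looks harmful
assert p_cancer_cdt(True) < p_cancer_cdt(False)  # CDT: smoking looks protective
```

Same distribution, opposite recommendations: conditioning mixes in the evidential link to the lesion, while the do(·) calculation removes it.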
But an arbitrary joint probability distribution can assign P(stuff | action=A) to any values whatsoever.
This is correct. You can remove the causality from a causal network and just use EDT on a joint probability distribution at the cost of increasing the number of nodes and the fan-in for each node. Since the memory requirements are exponential in fan-in and linear in number of nodes, this is a bad idea.
Besides the memory requirements, this adds another problem: in a causal network, we share parameters that are not shared in the ‘decaused’ network. This is necessary in order to be able to represent all possible mutilated graphs as marginals of the joint probability distribution, but means that if we’re trying to learn the parameters from observational data instead of getting from another source, we need much more data to get estimates that are as good. We can apply equality constraints, but then we might as well use CDT because we’re either using the equality constraints implied by CDT (and are thus correct) or we screwed something up.
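A quick back-of-the-envelope parameter count (binary nodes assumed) illustrates the memory point:

```python
# A full joint over n binary variables needs 2**n - 1 free parameters,
# while a factored network whose nodes have at most k parents needs at
# most one probability per parent configuration per node.
def joint_params(n):
    return 2 ** n - 1

def network_params(fan_ins):
    # fan_ins: list of each node's number of parents.
    return sum(2 ** k for k in fan_ins)

# e.g. 20 binary nodes in a chain (one root, fan-in 1 elsewhere):
print(network_params([0] + [1] * 19))  # 39
print(joint_params(20))                # 1048575
```

The factored form is what lets the mutilated graphs share parameters; the “decaused” joint has to carry all of them separately.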
There also seem to be numerous philosophical benefits to using the language of counterfactuals and conditionals, over just the language of conditionals. Causal networks really are more powerful, in the sense that Paul Graham describes here.
So it would seem that if you have the correct probability distribution, you can predict what would happen if I did action A, by virtue of me giving you the answers.
If you give me a joint probability distribution which I can marginalize over any possible action, yes, I can do those predictions because you gave me the answers.
But what use is an algorithm that, when you give it the answers, merely doesn’t destroy them? We want something that takes environments as inputs and outputs decisions as outputs, because then it will do the work for us.
In the smoking lesion variant where smoking is actually protective against cancer, but not enough to overcome the damage done by the lesion …
I tend to be sceptical of smoking lesion arguments on account of how the scenario always seems to be either underspecified or contradictory. For example, how can any agents in the smoking lesion problem be EDT agents at all?
If they always take the action recommended by EDT, and there is exactly one such action, then they must all take the same action. But in that case there can’t possibly be the postulated connection between the lesion and smoking (conditional on being an EDT agent). So an EDT agent that knows it implements EDT can’t believe that its decision to smoke affects the chances of having the lesion, on pain of making incorrect predictions.
On the other hand, if “EDT agents” in this problem only sometimes take the action recommended by EDT, and the rest of the time are somehow influenced by the presence or absence of the lesion, then the description of the problem that says that the node controlled by your decision theory is “decision to smoke” would seem to be wrong to begin with. (These EDT agents will predict that P(I smoke | I smoke) = 1 and be horribly surprised.)
This is correct. You can remove the causality from a causal network and just use EDT on a joint probability distribution at the cost of increasing the number of nodes and the fan-in for each node. Since the memory requirements are exponential in fan-in and linear in number of nodes, this is a bad idea.
This is something I can believe, though it is not a correctness argument. Certainly it’s plausible that in many scenarios it is computationally more convenient to apply CDT directly than to use a fully general model that has been taught about the same structure that CDT assumes.
For example, how can any agents in the smoking lesion problem be EDT agents at all?
In the statement of the smoking lesion problem I prefer, you have lots of observational data on people whose decision theory is unknown, but whose bodies are similar enough to yours that you think the things that give or don’t give them cancer will have the same effect on you. You also don’t know whether or not you have the lesion; a sensible prior is the population prevalence of the lesion.
Now it looks like we have a few options.
1. We only condition on data that’s narrowly similar. Here, that might mean only conditioning on other agents who use EDT, which would result in us having no data!
2. We condition on data that’s broadly similar, keeping the original correlations.
3. We condition on data that’s broadly similar, but try to break some of the original correlations.
Option 1 is unworkable. Option 2 is what I call ‘standard EDT,’ and it fails on the smoking lesion. Option 3 is generally the one EDTers use to rescue EDT from the smoking lesion. But the issue is that EDT gives you no guidance on which of the correlations to break; you have to figure it out from the problem description. One might expect that sitting down and working out whether or not to smoke using math breaks the correlation between smoking and having the lesion, as most people don’t do that. But should we also break the negative correlation between smoking and cancer conditional on lesion status? From the English names, we can probably get those right. If they’re unlabeled columns in a matrix or nodes in a graph, we’ll have trouble.
That work still has to be done somewhere, obviously; in CDT it’s done when one condenses the problem statement down to a causal network. (And CDTers historically being wrong on Newcomb’s is an example of what doing this work wrong looks like.) But putting work where it belongs and having good interfaces between your modules is a good idea, and I think this is a place where CDT does solidly better than EDT.
Certainly it’s plausible that in many scenarios it is computationally more convenient to apply CDT directly than to use a fully general model that has been taught about the same structure that CDT assumes.
I do think the linked Graham article is well worth reading; that all languages necessarily turn into machine code does not mean all languages are equally good for thinking in. Thinking in a more powerful language lets you have more powerful thoughts.
Smoking lesion is a problem with a logical contradiction in it. The decision is simultaneously a consequence of the lesion and of the decision theory’s output (but not of one of the theory’s inputs, such as the desire to smoke; in that case it is the desire that would correlate with the lesion, and conditional on that desire the decision itself would not).
edit: the smoking lesion problem seems more interesting from a psychological perspective. Perhaps it is difficult to detect internal contradictions within a hypothetical that asserts an untruth: any “this smells fishy” feeling is mis-attributed to the tension between the fact that smoking kills and the hypothetical genetics.
It could, thus, be very useful to come up with a real world example instead of using such hypotheticals.
In traditional decision theory as proposed by bayesians such as Jaynes, you always condition on all observed data. The thing that tells you whether any of this observed data is actually relevant is your model, and it does this by outputting a joint probability distribution for your situation conditional on all that data. (What I mean by “model” here is expressed in the language of probability as a prior joint distribution P(your situation × dataset | model), or equivalently a conditional distribution P(your situation | dataset, model) if you don’t care about computing the prior probabilities of your data.)
Option 2 is what I call “blindly importing related historical data as if it was a true description of your situation”. Clearly any model that says that the joint probability for your situation is identically equal to the empirical frequencies in any random data set is wrong.
From the English names, we can probably get those right. If they’re unlabeled columns in a matrix or nodes in a graph, we’ll have trouble.
The point is, it’s not about figuring stuff out from English names. It’s about having a model that correctly generalises from observed data to predictions. Unlabeled columns in a matrix are no trouble at all if your model relates them to the nodes in your personal situation in the right way.
The CDT solution of turning the problem into a causal graph and calculating probabilities with do(·) is effectively just such a model, that admittedly happens to be an elegant and convenient one. Here the information that allows you to generalise from observed data to make personal predictions is introduced when you use your human intelligence to figure out a causal graph for the situation.
Still, none of this addresses the issue that the problem itself is underspecified.
ETA: Lest you think I’ve just said that CDT is better than EDT, the point I’m trying to make here is that if you want a decision theory to generalise from data, you need to provide a model. “Your situation has the same probabilities as a causal intervention on this causal graph on that dataset, where nodes {A, B, C, …} match up to nodes {X, Y, Z, …}” is as good a model as any, and can certainly be used in EDT. The fact that EDT doesn’t come “model included” is a feature, not a bug.
Option 2 is what I call “blindly importing related historical data as if it was a true description of your situation”. Clearly any model that says that the joint probability for your situation is identically equal to the empirical frequencies in any random data set is wrong.
Agreed that this is a bad idea. I think where we disagree is that I don’t see EDT as discouraging this. It doesn’t even throw a type error when you give it blindly imported related historical data! CDT encourages you to actually think about causality before making any decisions.
It’s about having a model that correctly generalises from observed data to predictions.
Note that decision theory does actually serve a slightly different role from a general prediction module, because it should be built specifically for counterfactual reasoning. The five-and-ten argument seems to be an example of this: if while observing another agent, you see them choose $5 over $10, it could be reasonable to update towards them preferring $5 to $10. If considering the hypothetical situation where you choose $5 instead of $10, it does not make sense to update towards yourself preferring $5 to $10, or to draw whatever conclusion you like by the principle of explosion.
that admittedly happens to be an elegant and convenient one.
Given that you can emulate one system using the other, I think that elegance and convenience are the criteria we should use to choose between them. Note that emulating a joint probability distribution without causal knowledge using a causal network is trivial: you just use undirected edges for any correlations. But emulating a causal network using a joint probability distribution is difficult.
“Your situation has the same probabilities as a causal intervention on this causal graph on that dataset, where nodes {A, B, C, …} match up to nodes {X, Y, Z, …}” is as good a model as any, and can certainly be used in EDT. The fact that EDT doesn’t come “model included” is a feature, not a bug.
Imagine, instead of the smoking lesion, a “death paradox lesion”, Statistical analysis has shown that this lesion is associated with early death, and also that it is correlated with the ability of the agent to make correct logical decisions.
Assume you don’t want an early death. Should you conclude that you have a death paradox lesion?
There’s also the scenarion involving the EDT paradox lesion. This lesion is 1) correlated with early death, and 2) correlated with people’s use of EDT in the same way that the smoking lesion is correlated with smoking. What do you conclude and why?
In the smoking lesion variant where smoking is actually protective against cancer, but not enough to overcome the damage done by the lesion (leading to a Simpson’s Paradox), standard EDT recommends against smoking (because it increases your chance of having a lesion) and standard CDT recommends for smoking (because you sever the link to having a lesion, and so only the positive direct effect remains).
Smoking lesion problems are generally underspecified. If you can fill in additional detail, the “correct” decision changes. And I argue that a properly applied EDT outputs it.
Consider the scenario where the lesion affects your probabilty of smoking by affecting your conscious preferences. The correct decision is smoke, and EDT outputs it if you condition on the preferences.
In another scenario, an evil Omega probes you before you are born. If and only if it predicts that you will be a smoker, it puts a cancer lesion in your DNA (Omega is a good, though not necessarily perfect predictor). The cancer lesion doesn’t directly “cause” smoking, or, in the language of probability theory, it doesn’t correlate with smoking conditioned on Omega’s prediction. The correct decision is don’t smoke, and EDT outputs it since the problem is exactly isomorphic to Newcomb’s standard problem. CDT gets it wrong.
The problem is that this can lead to inconsistency when you have two omegas trying to predict each other.
This is one of the arguments against the possibility of Laplace’s Demon, and I agree that a world with two Omegas is probably going to be inconsistent.
If you decide what your innards are, and not what your action is, then this matches the problem description. If you can somehow have dishonest innards (Omega thinks I’m a one-boxer, then I can two-box), then this again violates the perfect prediction assumption.
I believe, as an empirical question, the first explicitly CDT accounts of Newcomb’s problem did not use graphs, but if you convert their argument into a graph, it implicitly assumes “B → M ← P.”
Isn’t the whole point of CDT that you cut any arrows from ancestor nodes with do(A) where A is your “intervention”? Obviously you can’t have your innards imply your action if you explicitly violate that connection by describing your decision as an intervention.
Here is how I understood typical CDT accounts of Newcomb’s problem: You have a graph given by B <- Innards -> P and B -> M <- P. Innards starts with some arbitrary prior probability since you don’t know your decision beforehand. You perturb the graph by deleting Innards -> B in order to calculate p(M | do(B)), and in doing so you end up with a graph “looking like” B -> M <- P. Then the usual “dominance” arguments determine the decision regardless of the prior probability on Innards.
Of course, after doing this analysis and coming up with a decision you now know (unconditionally) the value of B and therefore Innards, so arguably the probabilities for those should be set to 1 or 0 as appropriate in the original graph. This is generally interpreted by CDTists as a proof that this agent always two-boxes, and always gets the smaller reward.
Yes. My point is that when you have a supernatural Omega, then putting any of Omega’s actions in ancestor nodes of your decisions, instead of descendant nodes of your decisions, is a mistake that violates the problem description.
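For concreteness, the two computations on this graph can be sketched in a few lines. The payoff scheme and the 0.99 predictor accuracy are my own illustrative assumptions; the point is only that CDT’s answer is independent of the prior on Innards while EDT’s is not:

```python
# Newcomb's problem on the graph B <- Innards -> P, with B -> M <- P.
# Payoffs and predictor accuracy below are illustrative assumptions.

ACCURACY = 0.99
ONE_BOX, TWO_BOX = "one-box", "two-box"

def money(action, prediction):
    # Omega fills the opaque box with $1,000,000 iff it predicted one-boxing;
    # the transparent box always holds $1,000.
    return (1_000_000 if prediction == ONE_BOX else 0) + \
           (1_000 if action == TWO_BOX else 0)

def edt_value(action):
    # EDT conditions on the action, so the action is evidence about the prediction.
    p_pred_one = ACCURACY if action == ONE_BOX else 1 - ACCURACY
    return p_pred_one * money(action, ONE_BOX) + (1 - p_pred_one) * money(action, TWO_BOX)

def cdt_value(action, prior_innards_one=0.5):
    # CDT deletes Innards -> B: the prediction depends only on the prior over Innards.
    p_pred_one = prior_innards_one * ACCURACY + (1 - prior_innards_one) * (1 - ACCURACY)
    return p_pred_one * money(action, ONE_BOX) + (1 - p_pred_one) * money(action, TWO_BOX)

print(edt_value(ONE_BOX), edt_value(TWO_BOX))  # EDT prefers one-boxing
print(cdt_value(ONE_BOX), cdt_value(TWO_BOX))  # CDT: two-boxing wins by $1,000 for any prior
```

Under do(B) the prediction probability is fixed by the prior on Innards, so two-boxing dominates by exactly $1,000 whatever that prior is; conditioning instead lets the action carry evidence about the prediction.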
But if you don’t delete the incoming arcs on your decision nodes then it isn’t CDT anymore, it’s just EDT.
Which raises the question of why we should bother with CDT in the first place.
Some people claim that EDT fails at “smoking lesion”-type problems, but I think this is due to incorrect modelling or underspecification of the problem. If you use the correct model, EDT produces the “right” answer.
It seems to me that EDT is superior to CDT.
(Ilya Shpitser will disagree, but I never understood his arguments)
People have known how to deal with smoking lesion (under a different name) since the 18th century (hint: the solution is not the EDT solution):
http://www.e-publications.org/ims/submission/STS/user/submissionFile/12809?confirm=bbb928f0
The trick is to construct a system that deals with things 20 times more complicated than smoking lesion. That system is recent, and you will have to read e.g. my thesis, or Jin Tian’s thesis, or elsewhere to see what it is.
I have yet to see anyone advocating EDT actually handle a complicated example correctly. Or even a simple tricky example, e.g. the front door case.
You still delete incoming arcs when you make a decision. The argument is that if Omega perfectly predicts your decision, then causally his prediction must be a descendant of your decision, rather than an ancestor, because if it were an ancestor you would sever the connection that is still solid (and thus violate the problem description).
This is a shame, because he’s right. Here’s my brief attempt at an explanation of the difference between the two:
EDT uses the joint probability distribution. If you want to express a joint probability distribution as a graphical Bayesian network, then the direction of the arrows doesn’t matter (modulo some consistency concerns). If you utilize your human intelligence, you might be able to figure out “okay, for this particular action, we condition on X but not on Y,” but you do this for intuitive reasons that may be hard to formalize and which you might get wrong. When you use the joint probability distribution, you inherently assume that all correlation is causation, unless you’ve specifically added a node or data to block causation for any particular correlation.
CDT uses the causal network, where the direction of the arrows is informative. You can tell the difference between altering and observing something, in that observations condition things both up and down the causal graph, whereas alterations only condition things down the causal graph. You only need to use your human intelligence to build the right graph, and then the math can take over from there. For example, consider price controls: there’s a difference between observing that the price of an ounce of gold is $100 and altering the price of an ounce of gold to be $100. And causal networks allow you to answer questions like “given that the price of gold is observed to be $100, what will happen when we force the price of gold to be $120?”
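The gold-price point can be sketched with a toy two-node chain, Demand → Price (all the numbers here are illustrative assumptions, not from the comment above):

```python
# Toy chain Demand -> Price: observing a high price is evidence about demand,
# but forcing the price (a do() intervention) severs the incoming arc and
# leaves the belief about demand untouched. Numbers are illustrative assumptions.

p_high_demand = 0.3
p_high_price = {"high": 0.9, "low": 0.2}  # P(Price=high | Demand)

# Observation: condition up the graph with Bayes' rule.
joint_high = p_high_demand * p_high_price["high"]
joint_low = (1 - p_high_demand) * p_high_price["low"]
p_demand_obs = joint_high / (joint_high + joint_low)

# Intervention: do(Price=high) cuts Demand -> Price, so demand keeps its prior.
p_demand_do = p_high_demand

print(round(p_demand_obs, 3))  # observation raises P(high demand) above its prior
print(p_demand_do)             # intervention leaves it at 0.3
```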
Now, if you look at the math, you can see a way to embed a causal network in a network without causation. So we could use more complicated networks and let conditioning on nodes do the graph severing for us. I think this is a terrible idea, both philosophically and computationally, because it entails more work and less clarity, both of which are changes in the wrong direction.
If I understand correctly, in causal networks the orientation of the arcs must respect “physical causality”, which I roughly understand to mean consistency with the thermodynamic arrow of time.
There is no way for your action to cause Omega’s prediction in this sense, unless time travel is involved.
Yes, different Bayesian networks can represent the same probability distribution. And why would that be a problem? The probability distribution and your utility function are all that matters.
“Correlation vs causation” is an epistemic error. If you are making it then you are using the wrong probability distribution, not a “wrong” factorization of the correct probability distribution.
In the real world, this is correct, but it is not mathematically necessary. (To go up a meta level, this is about how you build causal networks in the first place, not about how you reason once you have a causal network; even if philosophers were right about CDT as the method to go from causal networks to decisions, they seem to have been confused about the method by which one goes from English problem statements to causal networks when it comes to Newcomb’s problem.)
It is. How else can Omega be a perfect predictor? (I may be stretching the language, but I count Laplace’s Demon as a time traveler, since it can ‘see’ the world at any time, even though it can only affect the world at the time that it’s at.)
The problem is that you can’t put any meaning into the direction of the arrows because they’re arbitrary.
If you give me a causal diagram and the embedded probabilities for the environment, and ask me to predict what would happen if you did action A (i.e. counterfactual reasoning), you’ve already given me all I need to calculate the probabilities of any of the other nodes you might be interested in, for any action included in the environment description.
If you give me a joint probability distribution for the environment, and ask me to predict what would happen if you did action A, I don’t have enough information to calculate the probabilities of the other nodes. You need to give me a different joint probability distribution for every possible action you could take. This requires a painful amount of communication, but possibly worse is that there’s no obvious type difference between the joint probability distribution for the environment and for the environment given a particular action—and if I calculate the consequences of an action given the whole environment’s data, I can get it wrong.
If you take physical causality out of the picture, then the orientation of the arcs is underspecified in the general case. But then, since you are only allowed to cut arcs that are incoming to the decision nodes, your decision model will be underspecified.
If you are going to allow time travel, defined in a broad sense, then your causal network will have cycles.
But the point is that in EDT you don’t care about the direction of the arrows.
If I give you a causal diagram for Newcomb’s problem (or some variation thereof) you will make a wrong prediction, because causal diagrams can’t properly represent it.
If the model includes myself as well as the environment, you will be able to make the correct prediction.
Of course, if you give this prediction back to me, and it influences my decision, then the model has to include you as well. Which may, in principle, cause Godelian self-reference issues. But that’s a fundamental limit of the logic capabilities of any computable system, there are no easy ways around it.
But that’s not as bad as it sounds: the fact that you can’t precisely predict everything about yourself doesn’t mean that you can’t predict anything or that you can’t make approximate predictions.
(for instance, GCC can compile and optimize GCC)
Causal decision models are one way to approximate hard decision problems, and they work well in many practical cases. Newcomb-like scenarios are specifically designed to make them fail.
Yes, and the fact that EDT does not assign meaning to the direction of the arrows is why it’s a less powerful language for describing environments.
If you allow retrocausation, I don’t see why you think this is the case.
I’m not convinced that this is the case.
Arrow orientation is an artifact of Bayesian networks, not a fundamental property of the world.
Causation going in one direction (if the nodes are properly defined) does appear to be a fundamental property of the real world.
I’m not sure what we are disagreeing about.
In CDT you need causal Bayesian networks where the arrow orientation reflects physical causality.
In EDT you just need probability distributions. You can represent them as Bayesian networks, but in this case arrow direction doesn’t matter, up to certain consistency constraints.
Why would EDT not having causal arrows be a problem?
Because the point of making decisions is to cause things to happen, and so encoding information about causality is a good idea.
Disagree. The directionality of causation appears to be a consequence of the Second Law of Thermodynamics, which is not a fundamental law.
All the microscopic laws are completely compatible with there being a region of space-time more or less like ours, but in reverse, with entropy decreasing monotonically. In fact, in a sufficiently large world, such a region is to be expected, since the Second Law is probabilistic. In this region, matches will light before (from our perspective) they are struck, and ripples in a pond will coalesce to a single point and eject a rock from the pond. If we use nodes similar to the ones we do in our environment, then in order to preserve the Causal Markov Condition, we would have to draw arrows in the opposite temporal direction.
Causation is not a useful concept when we’re talking about the fundamental level of nature, precisely because all fundamental interactions (with some very obscure exceptions) are completely time-symmetric. Causation (and the whole DAG framework) becomes useful when we move to the macroscopic world of temporally asymmetric phenomena. And the temporal asymmetry is just a manifestation of the Second Law.
Assuming CPT symmetry, the very reason why there’s still matter in the universe (as opposed to it all having annihilated with antimatter) in the first place must be one of those very obscure exceptions.
It’s true that CP-violations appear to be a necessary condition for the baryon asymmetry (if you make certain natural-seeming assumptions). It’s another question whether the observed CP-violations are sufficient for the asymmetry, if the other Sakharov conditions are met. And one of the open problems in contemporary cosmology is precisely that they don’t appear to be sufficient, that the subtle CP-violations we have observed so far (only in four types of mesons) are too subtle to account for the huge asymmetry between matter and anti-matter. They would only account for a tiny amount of that asymmetry. So, yeah, the actual violations of T-symmetry we see are in fact obscure exceptions. They are not sufficient to account for either the pervasive time asymmetry of macroscopic phenomena or the pervasive baryon asymmetry at the microscopic level. There are two ways to go from here: either there must be much more significant CP-violations that we haven’t yet been able to observe, or the whole Sakharov approach of accounting for the baryon asymmetry dynamically is wrong, and we have to turn to another kind of explanation (anthropic, maybe?). The latter option is what we have settled on when it comes to time asymmetry—we have realized that a fundamental single-universe dynamical explanation for the Second Law is not on the cards—and it may well turn out to be the right option for the baryon asymmetry as well.
It’s also worth noting that CP-violations by themselves would be insufficient to account for the asymmetry, even if they were less obscure than they appear to be. You also need the Second Law of Thermodynamics (this is the third Sakharov condition). In thermodynamic equilibrium any imbalance between matter and anti-matter generated by CP-violating interactions would be undone.
In any case, even if it turns out that CP-violating interactions are plentiful enough to account for the baryon asymmetry, they still could not possibly account for macroscopic temporal asymmetry. The particular sort of temporal asymmetry we see in the macroscopic world involves the disappearance of macroscopically available information. Microscopic CP-violations are information-preserving (they are CPT symmetric), so they cannot account for this type of asymmetry. If there is going to be a fundamental explanation for the arrow of time it would have to involve laws that don’t preserve information. The only serious candidate for this so far is (real, not instrumental) wavefunction collapse, and we all know how that theory is regarded around these parts.
I should make clear that by ‘fundamental’ I was not speaking in terms of physics, but in terms of decision theory, where causation does seem to be of central importance.
This reads to me like “conditioning on us being in a weird part of the universe where less likely events are more likely, then when we apply the assumption that we’re in a normal part of the universe where more likely events are more likely we get weird results.” And, yes, I agree with that reading, and I’m not sure what you want that to imply.
I wanted to imply that the temporal directionality of causation is a consequence of the Second Law of Thermodynamics. I guess the point would be that the “less likely” and “more likely” in your gloss are only correct if you restrict yourself to a macroscopic level of description. Described microscopically, both regions are equally likely, according to standard statistical mechanics. This is related to the idea that non-fundamental macroscopic factors make a difference when it comes to the direction of causal influence.
But yeah, this was based on misreading your use of “fundamental” as referring to physical fundamentality. If you meant decision-theoretically fundamental, then I agree with you. I thought you were espousing the Yudkowsky-esque line that causal relations are part of the fundamental furniture of the universe and that the Causal Markov Condition is deeper and more fundamental than the Second Law of Thermodynamics.
The point here is that if you have the correct probability distribution, all its predictions will be correct (i.e. have minimum expected regret). It seems that the difference between epistemology and decision theory can’t be emphasized enough. If it’s possible for your “mixing up correlation and causation” to result in you making an incorrect prediction and being surprised (when a different prediction would have been systematically more accurate), then there must be an error in your probability distribution.
But an arbitrary joint probability distribution can assign P(stuff | action=A) to any values whatsoever. What stops you from just setting all conditional probabilities to the correct values (i.e. those values such that they “predict what would happen if you did action A” correctly, which would be the output of P(stuff | do(A)) on the “correct” causal graph)? And furthermore, if that joint distribution does make optimal predictions (assuming that this “counterfactual reasoning” results in optimal predictions, because I can’t see any other reason you’d use a set of probabilities), then clearly it must be the probability distribution that is mandated by Cox’s theorem, etc.
Note, there is a free variable in the above, which is the unconditional probabilities P(A). But as long as the optimal P(A) values are all nonzero (which is the case if you don’t know the agent’s algorithm, for example), the optimality of the joint distribution requires P(stuff | A) to be correct.
So it would seem like if you have the correct probability distribution, you can predict what would happen if I did action A, by virtue of me giving you the answers. Unless I’ve made a fatal mistake in the above argument.
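This construction can be written out directly: pick any nonzero P(action) and define the joint as P(action) · P(outcome | do(action)); ordinary conditioning on that joint then returns the interventional answer by fiat. A sketch, with a made-up two-action environment:

```python
# "Decaused" joint: pick any nonzero P(action) and define
# P(action, outcome) = P(action) * P(outcome | do(action)).
# Conditioning on the action then recovers the interventional numbers by construction.
# The two-action environment below is made up for illustration.

p_do = {"A1": {"win": 0.9, "lose": 0.1},   # assumed P(outcome | do(action))
        "A2": {"win": 0.4, "lose": 0.6}}
p_action = {"A1": 0.5, "A2": 0.5}          # free variable; any nonzero values work

joint = {(a, o): p_action[a] * p_do[a][o] for a in p_do for o in p_do[a]}

def conditional(outcome, action):
    # P(outcome | action), computed from the joint distribution alone.
    marginal = sum(p for (a, _), p in joint.items() if a == action)
    return joint[(action, outcome)] / marginal

print(conditional("win", "A1"), conditional("win", "A2"))  # recovers 0.9 and 0.4
```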
In the smoking lesion variant where smoking is actually protective against cancer, but not enough to overcome the damage done by the lesion (leading to a Simpson’s Paradox), standard EDT recommends against smoking (because it increases your chance of having a lesion) and standard CDT recommends for smoking (because you sever the link to having a lesion, and so only the positive direct effect remains). They give different estimates of the difference between the probability of getting cancer given that you chose to start smoking and the probability of getting cancer given that you chose not to smoke, because EDT doesn’t natively understand the difference between “are a smoker” and “chose to start smoking.” If you understand the difference, you can fudge things so that EDT works while you’re actively putting effort into it.
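A concrete parameterisation of this Simpson’s Paradox variant (all numbers are my own illustrative assumptions: smoking lowers cancer risk by 0.1 within each lesion stratum, but lesion-carriers smoke far more often):

```python
# Simpson's-paradox lesion: smoking lowers cancer risk by 0.1 inside each
# lesion stratum, but lesion-carriers smoke much more often.
# All numbers are illustrative assumptions.

P_LESION = 0.5
P_SMOKE = {True: 0.8, False: 0.2}                   # P(smoke | lesion)
P_CANCER = {(True, True): 0.7, (True, False): 0.8,  # P(cancer | lesion, smoke)
            (False, True): 0.1, (False, False): 0.2}

def edt_risk(smoke):
    # P(cancer | smoke): the action is evidence about the lesion.
    num = den = 0.0
    for lesion in (True, False):
        p_l = P_LESION if lesion else 1 - P_LESION
        p_s = P_SMOKE[lesion] if smoke else 1 - P_SMOKE[lesion]
        den += p_l * p_s
        num += p_l * p_s * P_CANCER[(lesion, smoke)]
    return num / den

def cdt_risk(smoke):
    # P(cancer | do(smoke)): severing Lesion -> Smoke leaves the prior on the lesion.
    return sum((P_LESION if lesion else 1 - P_LESION) * P_CANCER[(lesion, smoke)]
               for lesion in (True, False))

print(round(edt_risk(True), 2), round(edt_risk(False), 2))  # 0.58 vs 0.32: EDT says don't smoke
print(round(cdt_risk(True), 2), round(cdt_risk(False), 2))  # 0.4 vs 0.5: CDT says smoke
```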
This is correct. You can remove the causality from a causal network and just use EDT on a joint probability distribution at the cost of increasing the number of nodes and the fan-in for each node. Since the memory requirements are exponential in fan-in and linear in number of nodes, this is a bad idea.
Besides the memory requirements, this adds another problem: in a causal network, we share parameters that are not shared in the ‘decaused’ network. This is necessary in order to be able to represent all possible mutilated graphs as marginals of the joint probability distribution, but means that if we’re trying to learn the parameters from observational data instead of getting from another source, we need much more data to get estimates that are as good. We can apply equality constraints, but then we might as well use CDT because we’re either using the equality constraints implied by CDT (and are thus correct) or we screwed something up.
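The parameter-count argument is easy to make concrete. Assuming binary variables, a node’s conditional probability table needs one free parameter per configuration of its parents:

```python
# A binary node with k binary parents needs 2**k free parameters
# (one per parent configuration); a node of arity r needs (r - 1) * r**k.

def cpt_params(fan_in, arity=2):
    return (arity - 1) * arity ** fan_in

# A sparse causal network: 30 binary nodes, fan-in at most 3.
sparse = 30 * cpt_params(3)     # 240 parameters
# The same 30 variables as one flat joint distribution:
full_joint = 2 ** 30 - 1        # 1,073,741,823 parameters
print(sparse, full_joint)
```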
There also seem to be numerous philosophical benefits to using the language of counterfactuals and conditionals, over just the language of conditionals. Causal networks really are more powerful, in the sense that Paul Graham describes here.
If you give me a joint probability distribution which I can marginalize over any possible action, yes, I can do those predictions because you gave me the answers.
But what use is an algorithm that, when you give it the answers, merely doesn’t destroy them? We want something that takes environments as inputs and outputs decisions as outputs, because then it will do the work for us.
I tend to be sceptical of smoking lesion arguments on account of how the scenario always seems to be either underspecified or contradictory. For example, how can any agents in the smoking lesion problem be EDT agents at all?
If they always take the action recommended by EDT, and there is exactly one such action, then they must all take the same action. But in that case there can’t possibly be the postulated connection between the lesion and smoking (conditional on being an EDT agent). So an EDT agent that knows it implements EDT can’t believe that its decision to smoke affects the chances of having the lesion, on pain of making incorrect predictions.
On the other hand, if “EDT agents” in this problem only sometimes take the action recommended by EDT, and the rest of the time are somehow influenced by the presence or absence of the lesion, then the description of the problem that says that the node controlled by your decision theory is “decision to smoke” would seem to be wrong to begin with. (These EDT agents will predict that P(I smoke | I smoke) = 1 and be horribly surprised.)
This is something I can believe, though it is not a correctness argument. Certainly it’s plausible that in many scenarios it is computationally more convenient to apply CDT directly than to use a fully general model that has been taught about the same structure that CDT assumes.
In the statement of the smoking lesion problem I prefer, you have lots of observational data on people whose decision theory is unknown, but whose bodies are similar enough to yours that you think the things that give or don’t give them cancer will have the same effect on you. You also don’t know whether or not you have the lesion; a sensible prior is the population prevalence of the lesion.
Now it looks like we have a few options.
1. We only condition on data that’s narrowly similar. Here, that might mean only conditioning on other agents who use EDT, which would result in us having no data!
2. We condition on data that’s broadly similar, keeping the original correlations.
3. We condition on data that’s broadly similar, but try to break some of the original correlations.
Option 1 is unworkable. Option 2 is what I call ‘standard EDT,’ and it fails on the smoking lesion. Option 3 is generally the one EDTers use to rescue EDT from the smoking lesion. But the issue is that EDT gives you no guidance on which of the correlations to break; you have to figure it out from the problem description. One might expect that sitting down and working out whether or not to smoke using math breaks the correlation between smoking and having the lesion, as most people don’t do that. But should we also break the negative correlation between smoking and cancer conditional on lesion status? From the English names, we can probably get those right. If they’re unlabeled columns in a matrix or nodes in a graph, we’ll have trouble.
That work still has to be done somewhere, obviously; in CDT it’s done when one condenses the problem statement down to a causal network. (And CDTers historically being wrong on Newcomb’s is an example of what doing this work wrong looks like.) But putting work where it belongs and having good interfaces between your modules is a good idea, and I think this is a place where CDT does solidly better than EDT.
I do think the linked Graham article is well worth reading; that all languages necessarily turn into machine code does not mean all languages are equally good for thinking in. Thinking in a more powerful language lets you have more powerful thoughts.
Smoking lesion is a problem with a logical contradiction in it. The decision is simultaneously a consequence of the lesion, and of the decision theory’s output (but not one of its inputs, such as e.g. the desire to smoke, in which case it’s this desire that will correlate, and conditional on that desire, the decision itself won’t).
edit: smoking lesion problem seems more interesting from psychological perspective. Perhaps it is difficult to detect internal contradictions within a hypothetical that asserts an untruth—any “this smells fishy” feeling is mis-attributed to the tension between the fact of how smoking kills and the hypothetical genetics.
It could, thus, be very useful to come up with a real world example instead of using such hypotheticals.
In traditional decision theory as proposed by Bayesians such as Jaynes, you always condition on all observed data. The thing that tells you whether any of this observed data is actually relevant is your model, and it does this by outputting a joint probability distribution for your situation conditional on all that data. (What I mean by “model” here is expressed in the language of probability as a prior joint distribution P(your situation × dataset | model), or equivalently a conditional distribution P(your situation | dataset, model) if you don’t care about computing the prior probabilities of your data.)
Option 2 is what I call “blindly importing related historical data as if it was a true description of your situation”. Clearly any model that says that the joint probability for your situation is identically equal to the empirical frequencies in any random data set is wrong.
The point is, it’s not about figuring stuff out from English names. It’s about having a model that correctly generalises from observed data to predictions. Unlabeled columns in a matrix are no trouble at all if your model relates them to the nodes in your personal situation in the right way.
The CDT solution of turning the problem into a causal graph and calculating probabilities with do(·) is effectively just such a model, one that admittedly happens to be elegant and convenient. Here the information that allows you to generalise from observed data to make personal predictions is introduced when you use your human intelligence to figure out a causal graph for the situation.
Still, none of this addresses the issue that the problem itself is underspecified.
ETA: Lest you think I’ve just said that CDT is better than EDT, the point I’m trying to make here is that if you want a decision theory to generalise from data, you need to provide a model. “Your situation has the same probabilities as a causal intervention on this causal graph on that dataset, where nodes {A, B, C, …} match up to nodes {X, Y, Z, …}” is as good a model as any, and can certainly be used in EDT. The fact that EDT doesn’t come “model included” is a feature, not a bug.
Agreed that this is a bad idea. I think where we disagree is that I don’t see EDT as discouraging this. It doesn’t even throw a type error when you give it blindly imported related historical data! CDT encourages you to actually think about causality before making any decisions.
Note that decision theory does actually serve a slightly different role from a general prediction module, because it should be built specifically for counterfactual reasoning. The five-and-ten argument seems to be an example of this: if while observing another agent, you see them choose $5 over $10, it could be reasonable to update towards them preferring $5 to $10. If considering the hypothetical situation where you choose $5 instead of $10, it does not make sense to update towards yourself preferring $5 to $10, or to draw whatever conclusion you like by the principle of explosion.
Given that you can emulate one system using the other, I think that elegance and convenience are the criteria we should use to choose between them. Note that emulating a joint probability without causal knowledge using a causal network is trivial (you just use undirected edges for any correlations), but emulating a causal network using a joint probability is difficult.
Precisely.
Imagine, instead of the smoking lesion, a “death paradox lesion”. Statistical analysis has shown that this lesion is associated with early death, and also that it is correlated with the ability of the agent to make correct logical decisions.
Assume you don’t want an early death. Should you conclude that you have a death paradox lesion?
There’s also the scenario involving the EDT paradox lesion. This lesion is 1) correlated with early death, and 2) correlated with people’s use of EDT in the same way that the smoking lesion is correlated with smoking. What do you conclude and why?
I don’t understand most of your position on EDT/CDT, but I especially don’t understand how
follows from the previous sentence.
I also thought P(A|A)=1 followed from the axioms of probability.
Smoking lesion problems are generally underspecified. If you can fill in additional detail, the “correct” decision changes. And I argue that a properly applied EDT outputs it.
Consider the scenario where the lesion affects your probability of smoking by affecting your conscious preferences.
The correct decision is smoke, and EDT outputs it if you condition on the preferences.
In another scenario, an evil Omega probes you before you are born. If and only if it predicts that you will be a smoker, it puts a cancer lesion in your DNA (Omega is a good, though not necessarily perfect predictor).
The cancer lesion doesn’t directly “cause” smoking, or, in the language of probability theory, it doesn’t correlate with smoking conditioned on Omega’s prediction.
The correct decision is don’t smoke, and EDT outputs it since the problem is exactly isomorphic to Newcomb’s standard problem. CDT gets it wrong.
The problem is that this can lead to inconsistency when you have two omegas trying to predict each other.
This is one of the arguments against the possibility of Laplace’s Demon, and I agree that a world with two Omegas is probably going to be inconsistent.
It should be noted that this also makes the transparent Newcomb problem ill-posed, because the transparent boxes essentially make the box-picker an Omega.