It’s not obvious to me what you mean by “mysterious correlations” and “straightforward correlations.” Correlations are statistical objects that either exist or don’t, and I don’t know what conscious decision based on observations you’re referring to in the smoking lesion problem. What makes the smoking lesion problem a problem is that the lesions are unobserved.
Well, the correlations in the smoking lesion problem are mysterious because they aren’t caused by agents observing lesion|no-lesion and deciding whether to smoke based on that. They are mysterious because it is simply postulated that “the lesion causes smoking without being observed”, without any explanation of how. It is also generally assumed that the correlation somehow still applies when you’re deciding what to do using EDT, which I personally have some doubt about (EDT decides what to do based only on preferences and observations, so how can its output be correlated with anything else?).
Straightforward correlations are those where, for example, people go out with an umbrella if they see rain clouds forming. The correlation is created by straightforward decision-making based on observations. Simple statistical reasoning suggests that you only have reason to expect these correlations to hold for an EDT agent if the EDT agent makes the same decisions in the same situations. Furthermore, these correlations tend to pose no problem for EDT, because the only time an EDT agent is in a position to take an action correlated with some observation in this way (“I observe rain clouds, should I take my umbrella?”), they must have already observed the correlate (“rain clouds”), so EDT makes no attempt to influence it (“whether or not I take my umbrella, I know there are rain clouds already”).
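As a rough illustration of that last point (a minimal sketch with invented numbers, not anything from the original problem statement): once the correlate has already been observed, conditioning on the action no longer moves it.

```python
# Joint distribution over (clouds, umbrella), where the correlation comes from
# agents taking an umbrella when they see clouds.  All numbers are made up.
from itertools import product

P = {}
for clouds, umbrella in product([True, False], repeat=2):
    p_clouds = 0.3 if clouds else 0.7
    p_umb = (0.9 if umbrella else 0.1) if clouds else (0.05 if umbrella else 0.95)
    P[(clouds, umbrella)] = p_clouds * p_umb

def cond_prob(event, given):
    """P(event | given), computed by brute-force enumeration of the joint table."""
    num = sum(p for w, p in P.items() if event(w) and given(w))
    return num / sum(p for w, p in P.items() if given(w))

# Seen from outside, taking the umbrella is evidence of clouds:
print(cond_prob(lambda w: w[0], lambda w: w[1]))               # ~0.89
# But for an agent who has already observed the clouds, conditioning on either
# action leaves P(clouds) at 1, so there is nothing left to "influence":
print(cond_prob(lambda w: w[0], lambda w: w[0] and w[1]))      # 1.0
print(cond_prob(lambda w: w[0], lambda w: w[0] and not w[1]))  # 1.0
```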
Returning to the smoking lesion problem, there are a few ways of making the mystery go away. You can suppose that the lesion works by making you smoke even after you (consciously) decide to do something else. In this case the decision of the EDT agent isn’t actually smoke | don't-smoke, but rather you get to decide a parameter of something else that determines whether you smoke. This makes the lesion not actually a cause of your decision, so you choose-to-smoke, obviously.
Alternatively, I was going to analyse the situation where the lesion makes you want to smoke (by altering your decision theory/preferences), but it made my head hurt. I anticipate that EDT wouldn’t smoke in that situation iff you can somehow remain ignorant of your decision or utility function even while implementing EDT, but I can’t be sure.
Basically, the causal reasons behind your data (why do people always get up after 7AM?) matter, because they determine what kind of causal graph you can infer for the situation of an EDT agent with some given set of observations, as opposed to the situation of whatever agents are in the dataset.
Postscript regarding LCPW: If I’m trying to argue that EDT doesn’t normally break, then presenting a situation where it does break isn’t necessarily proper LCPW, because I never argued that it always did the right thing (which would require me to handle edge cases).
it is simply postulated that “the lesion causes smoking without being observed” without any explanation of how
No mathematical decision theory requires verbal explanations to be part of the model that it operates on. (It’s true that when learning a causal model from data, you need causal assumptions; but when a problem provides the model rather than the data, this is not necessary.)
it is generally assumed that the correlation somehow still applies when you’re deciding what to do using EDT, which I personally have some doubt about
Do you doubt that this is how EDT, as a mathematical algorithm, operates, or do you doubt that this is a wise way to construct a decision-making algorithm?
If the second, this is why I think EDT is a subpar decision theory. It sees the world as a joint probability distribution, and does not have the ability to distinguish correlation and causation, which means it cannot know whether or not a correlation applies for any particular action (and so assumes that all do).
If the first, I’m not sure how to clear up your confusion. There is a mindset that programming cultivates, which is that the system does exactly what you tell it to, with the corollary that your intentions have no weight.
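To make the “joint probability distribution only” point concrete, here is a minimal sketch (invented lesion/smoking numbers, not anyone’s canonical formalization): given only p(lesion, smoke, cancer), conditioning on the action drags the lesion correlation along with it, whereas an interventional calculation does not.

```python
from itertools import product

# Made-up model: the lesion causes both smoking (historically) and cancer;
# smoking itself is harmless here.
p_lesion = 0.2
p_smoke_given_lesion = {True: 0.9, False: 0.1}
p_cancer_given_lesion = {True: 0.8, False: 0.05}

P = {}
for lesion, smoke, cancer in product([True, False], repeat=3):
    p = p_lesion if lesion else 1 - p_lesion
    p *= p_smoke_given_lesion[lesion] if smoke else 1 - p_smoke_given_lesion[lesion]
    p *= p_cancer_given_lesion[lesion] if cancer else 1 - p_cancer_given_lesion[lesion]
    P[(lesion, smoke, cancer)] = p

def utility(smoke, cancer):
    return (1 if smoke else 0) + (-100 if cancer else 0)

def edt_value(action):
    """E[utility | smoke = action]: plain conditioning on the joint distribution."""
    den = sum(p for (l, s, c), p in P.items() if s == action)
    return sum(p * utility(s, c) for (l, s, c), p in P.items() if s == action) / den

def causal_value(action):
    """E[utility | do(smoke = action)]: the lesion's distribution is left alone."""
    return sum((p_lesion if l else 1 - p_lesion)
               * (p_cancer_given_lesion[l] if c else 1 - p_cancer_given_lesion[l])
               * utility(action, c)
               for l, c in product([True, False], repeat=2))

print(edt_value(True), edt_value(False))        # smoking looks much worse to EDT
print(causal_value(True), causal_value(False))  # causally, smoking is harmless here
```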
If I’m trying to argue that EDT doesn’t normally break, then presenting a situation where it does break isn’t necessarily proper LCPW.
The trouble with LCPW is that it’s asymmetric; Eliezer claims that the LCPW is the one where his friend has to face a moral question, and Eliezer’s friend might claim that the LCPW is the one where Eliezer has to face a practical problem.
The way to break the asymmetry is to try to find the most informative comparison. If the hypothetical has been fought, then we learn nothing about morality, because there is no moral problem. If the hypothetical is accepted despite faults, then we learn quite a bit about morality.
The issues with EDT might require ‘edge cases’ to make obvious, but in the same way that the issues with Newtonian dynamics might require ‘edge cases’ to make obvious.
No mathematical decision theory requires verbal explanations to be part of the model that it operates on. (It’s true that when learning a causal model from data, you need causal assumptions; but when a problem provides the model rather than the data, this is not necessary.)
What I’m saying is that the only way to solve any decision theory problem is to learn a causal model from data. It just doesn’t make sense to postulate particular correlations between an EDT agent’s decisions and other things before you even know what EDT decides! The only reason you get away with assuming graphs like lesion -> (CDT Agent) -> action for CDT is because the first thing CDT does when calculating a decision is break all connections to parents by means of do(...).
Take Jiro’s example. The lesion makes people jump into volcanoes. 100% of them, and no-one else. Furthermore, I’ll postulate that all of them are using decision theory “check if I have the lesion, if so, jump into a volcano, otherwise don’t”. Should you infer the causal graph lesion -> (EDT decision: jump?) -> die with a perfect correlation between lesion and jump? (Hint: no, that would be stupid, since we’re not using jump-based-on-lesion-decision-theory, we’re using EDT.)
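A rough simulation of that point, with all parameters invented: the perfect lesion/jump correlation in the historical data is produced by the decision rule those agents use, and it goes away as soon as the rule changes.

```python
import random

random.seed(0)

def population(decision_rule, n=100_000):
    """Generate (lesion, jump) pairs for agents using the given decision rule."""
    data = []
    for _ in range(n):
        lesion = random.random() < 0.01
        data.append((lesion, decision_rule(lesion)))
    return data

def jump_rates(data):
    """Jump frequency among agents with and without the lesion."""
    with_lesion = [j for l, j in data if l]
    without_lesion = [j for l, j in data if not l]
    return sum(with_lesion) / len(with_lesion), sum(without_lesion) / len(without_lesion)

# Historical agents: "jump iff I have the lesion" -- perfect correlation.
print(jump_rates(population(lambda lesion: lesion)))   # (1.0, 0.0)
# An agent following a different rule (never jump): the correlation is gone.
print(jump_rates(population(lambda lesion: False)))    # (0.0, 0.0)
```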
There is a mindset that programming cultivates, which is that the system does exactly what you tell it to, with the corollary that your intentions have no weight.
In programming, we also say “garbage in, garbage out”. You are feeding EDT garbage input by giving it factually wrong joint probability distributions.
Ok, what about cases where there are multiple causal hypotheses that are observationally indistinguishable:
a → b → c
vs
a ← b ← c
Both models imply the same joint probability distribution p(a,b,c) with a single conditional independence (a independent of c given b) and cannot be told apart without experimentation. That is, you cannot call p(a,b,c) “factually wrong” because the correct causal model implies it. But the wrong causal model implies it too! To figure out which is which requires causal information. You can give it to EDT and it will work—but then it’s not EDT anymore.
I can give you a graph which implies the same independences as my HAART example but has a completely different causal structure, and the procedure you propose here:
http://lesswrong.com/lw/hwq/evidential_decision_theory_selection_bias_and/9d6f
will give the right answer in one case and the wrong answer in another.
The point is, EDT lacks a rich enough input language to avoid getting garbage inputs in lots of standard cases. Or, more precisely, EDT lacks a rich enough input language to tell when input is garbage and when it isn’t. This is why EDT is a terrible decision theory.
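Here is a sketch of that observational-equivalence point with invented numbers (an abstract chain, not the actual HAART example): a single joint distribution p(a,b,c), compatible with both a → b → c and a ← b ← c, for which plain conditioning matches the interventional answer under one structure and not under the other.

```python
from itertools import product

# Parameterised as the chain a -> b -> c.  The reversed chain a <- b <- c can be
# re-parameterised to produce exactly this same joint distribution.
p_a = 0.5
p_b_given_a = {True: 0.9, False: 0.2}
p_c_given_b = {True: 0.8, False: 0.1}

P = {}
for a, b, c in product([True, False], repeat=3):
    p = p_a if a else 1 - p_a
    p *= p_b_given_a[a] if b else 1 - p_b_given_a[a]
    p *= p_c_given_b[b] if c else 1 - p_c_given_b[b]
    P[(a, b, c)] = p

def cond(event, given):
    num = sum(p for w, p in P.items() if event(w) and given(w))
    return num / sum(p for w, p in P.items() if given(w))

p_c_given_a = cond(lambda w: w[2], lambda w: w[0])   # what conditioning on a gives
p_c = sum(p for (a, b, c), p in P.items() if c)      # the marginal of c

# If the true structure is a -> b -> c:  p(c | do(a)) = p(c | a)  -- conditioning is right.
# If the true structure is a <- b <- c:  p(c | do(a)) = p(c)      -- conditioning is wrong.
print(p_c_given_a, p_c)   # ~0.73 vs ~0.485 with these numbers
```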
What I’m saying is that the only way to solve any decision theory problem is to learn a causal model from data.
I think there are a couple of confusions this sentence highlights.
First, there are approaches to solving decision theory problems that don’t use causal models. Part of what has made this conversation challenging is that there are several different ways to represent the world, and so even if CDT is the best / natural one, it needs to be distinguished from other approaches. EDT is not CDT in disguise; the two are distinct formulas / approaches.
Second, there are good reasons to modularize the components of the decision theory, so that you can treat learning a model from data separately from making a decision given a model. An algorithm to turn models into decisions should be able to operate on an arbitrary model, where it sees a → b → c as isomorphic to Drunk → Fall → Death.
To tell an anecdote, when my decision analysis professor would teach that subject to petroleum engineers, he quickly learned not to use petroleum examples. Say something like “suppose the probability of striking oil by drilling a well here is 40%” and an engineer’s hand will shoot up, asking “what kind of rock is it?”. The kind of rock is useful for determining whether or not the probability is 40% or something else, but the question totally misses the point of what the professor is trying to teach. The primary example he uses is choosing a location for a party subject to the uncertainty of the weather.
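A small sketch of the modularity point, using the party example (the probabilities and utilities here are invented for illustration): the decision routine takes whatever model it is handed and never looks at what the labels mean.

```python
def best_action(model):
    """model: {action: [(probability, utility), ...]}.
    Returns the action with the highest expected utility, whatever the names refer to."""
    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)
    return max(model, key=lambda action: expected_utility(model[action]))

# 40% chance of good weather, made up for the example.
party = {
    "outdoors": [(0.4, 100), (0.6, 0)],   # great if sunny, washout if it rains
    "indoors":  [(0.4, 60),  (0.6, 50)],  # fine either way
}
print(best_action(party))  # "indoors" under these numbers
```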
It just doesn’t make sense to postulate particular correlations between an EDT agent’s decisions and other things before you even know what EDT decides!
I’m not sure how to interpret this sentence.
The way EDT operates is to perform the following three steps for each possible action in turn:
Assume that I saw myself doing X.
Perform a Bayesian update on this new evidence.
Calculate and record my utility.
It then chooses the possible action which had the highest calculated utility.
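A minimal sketch of the three-step procedure just described, over an explicit joint distribution with a utility function (the names and numbers are illustrative, not a quote of anyone’s formalism):

```python
def edt_choose(P, utility, actions):
    """P: dict mapping (action, outcome) -> probability."""
    def value(x):
        # Steps 1 & 2: treat "I saw myself doing x" as evidence and update on it.
        p_x = sum(p for (a, o), p in P.items() if a == x)
        posterior = {o: p / p_x for (a, o), p in P.items() if a == x}
        # Step 3: expected utility under the updated distribution.
        return sum(posterior[o] * utility(x, o) for o in posterior)
    # Finally, choose the action whose hypothetical observation scored highest.
    return max(actions, key=value)

# Toy usage: in the historical joint distribution, the outcome correlates with the action.
P = {("smoke", "cancer"): 0.15, ("smoke", "healthy"): 0.10,
     ("abstain", "cancer"): 0.05, ("abstain", "healthy"): 0.70}
u = lambda a, o: (1 if a == "smoke" else 0) - (100 if o == "cancer" else 0)
print(edt_choose(P, u, ["smoke", "abstain"]))   # "abstain" on these numbers
```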
One interpretation is you saying that EDT doesn’t make sense, but I’m not sure I agree with what seems to be the stated reason. It looks to me like you’re saying “it doesn’t make sense to assume that you do X until you know what you decide!”, when I think that does make sense, but the problem is using that assumption as Bayesian evidence as if it were an observation.
The way EDT operates is to perform the following three steps for each possible action in turn:
Assume that I saw myself doing X.
Perform a Bayesian update on this new evidence.
Calculate and record my utility.
Ideal Bayesian updates assume logical omniscience, right? Including knowledge of the logical fact of what EDT would do for any given input. If you know that you are an EDT agent, and condition on all of your past observations and also on the fact that you do X, but X is not in fact what EDT does given those inputs, then as an ideal Bayesian you will know that you’re conditioning on something impossible. More generally, what update you perform in step 2 depends on EDT’s input-output map, thus making the definition circular.
So, is EDT really underspecified? Or are you supposed to search for a fixed point of the circular definition, if there is one? Or does it use some method other than Bayes for the hypothetical update? Or does an EDT agent really break if it ever finds out its own decision algorithm? Or did I totally misunderstand?
Ideal Bayesian updates assume logical omniscience, right? Including knowledge of the logical fact of what EDT would do for any given input.
Note that step 1 is “Assume that I saw myself doing X,” not “Assume that EDT outputs X as the optimal action.” I believe that excludes any contradictions along those lines. Does logical omniscience preclude imagining counterfactual worlds?
If I already know “I am EDT”, then “I saw myself doing X” does imply “EDT outputs X as the optimal action”. Logical omniscience doesn’t preclude imagining counterfactual worlds, but imagining counterfactual worlds is a different operation than performing Bayesian updates. CDT constructs counterfactuals by severing some of the edges in its causal graph and then assuming certain values for the nodes that no longer have any causes. TDT does too, except with a different graph and a different choice of edges to sever.
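A hedged sketch of the edge-severing operation (graph surgery) on the lesion example, with invented numbers; it only shows the shape of the operation, not anyone’s official implementation:

```python
import random

random.seed(0)

P_LESION = 0.2

def sample(smoke_setting=None):
    """Sample (lesion, smoke, cancer).  If smoke_setting is given, the lesion -> smoke
    edge is severed and smoke is fixed to that value (an intervention); otherwise
    smoke follows its usual causal parent, the lesion."""
    lesion = random.random() < P_LESION
    if smoke_setting is None:
        smoke = random.random() < (0.9 if lesion else 0.1)  # edge intact
    else:
        smoke = smoke_setting                               # edge severed
    cancer = random.random() < (0.8 if lesion else 0.05)    # caused by the lesion only
    return lesion, smoke, cancer

def cancer_rate(samples):
    return sum(c for _, _, c in samples) / len(samples)

observational = [sample() for _ in range(200_000)]
do_smoke = [sample(True) for _ in range(200_000)]

# Conditioning on smoking in the observational samples picks up the lesion:
print(cancer_rate([s for s in observational if s[1]]))   # noticeably elevated
# Intervening (edge severed) leaves the lesion's distribution alone:
print(cancer_rate(do_smoke))                             # ~0.2*0.8 + 0.8*0.05 = 0.2
```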
I don’t know how I can fail to communicate so consistently.
Yes, you can technically apply “EDT” to any causal model or (more generally) joint probability distribution containing an “EDT agent decision” node. But in practice this freedom is useless, because to derive an accurate model you generally need to take account of a) the fact that the agent is using EDT and b) any observations the agent does or does not make. To be clear, the input EDT requires is a probabilistic model describing the EDT agent’s situation (not describing historical data of “similar” situations).
There are people here trying to argue against EDT by taking a model describing historical data (such as people following dumb decision theories jumping into volcanoes) and feeding this model directly into EDT. Which is simply wrong. A model that describes the historical behaviour of agents using some other decision theory does not in general accurately describe an EDT agent in the same situation.
The fact that this egregious mistake looks perfectly normal is an artifact of the fact that CDT doesn’t care about causal parents of the “CDT decision” node.
I don’t know how I can fail to communicate so consistently.
I suspect it’s because what you are referring to as “EDT” is not what experts in the field use that technical term to mean.
nsheppard-EDT is, as far as I can tell, the second half of CDT. Take a causal model and use the do() operator to create the manipulated subgraph that would result from taking a possible action (as an intervention). Determine the joint probability distribution from the manipulated subgraph. Condition on observing that action with the joint probability distribution, and calculate the probabilistically-weighted mean utility of the possible outcomes. This is isomorphic to CDT, and so referring to it as EDT leads to confusion.
Whatever. I give up.
Here’s a modified version. Instead of a smoking lesion, there’s a “jump into active volcano lesion”. Furthermore, the correlation isn’t as puny as for the smoking lesion. 100% of people with this lesion jump into active volcanoes and die, and nobody else does.
Should you go jump into an active volcano?
Using a decision theory to figure out what decision you should make assumes that you’re capable of making a decision. “The lesion causes you to jump into an active volcano/smoke” and “you can choose whether to jump into an active volcano/smoke” are contradictory. Even “the lesion is correlated (at less than 100%) with jumping into an active volcano/smoking” and “you can choose whether to jump into an active volcano/smoke” are contradictory unless “is correlated with” involves some correlation for people who don’t use decision theory and no correlation for people who do.
Using a decision theory to figure out what decision you should make assumes that you’re capable of making a decision.
Agreed.
unless “is correlated with” involves some correlation for people who don’t use decision theory and no correlation for people who do.
Doesn’t this seem sort of realistic, actually? Decisions made with System 1 and System 2, to use Kahneman’s language, might have entirely different underlying algorithms. (There is some philosophical trouble about how far we can push the idea of an ‘intervention’, but I think for human-scale decisions there is a meaningful difference between interventions and observations such that CDT distinguishing between them is a feature.)
This maps onto an objection by proponents of EDT that the observational data might not be from people using EDT, and thus the correlation may disappear when EDT comes onto the stage. I think that objection proves too much: suppose all of our observational data on the health effects of jumping off cliffs comes from subjects who were not using EDT (suppose they were drunk). I don’t see a reason inside the decision theory for differentiating between the effects of EDT on the correlation between jumping off the cliff and dying and the effects of EDT on the correlation between smoking and having the lesion.
These two situations correspond to two different causal structures—Drunk → Fall → Death and Smoke ← Lesion → Cancer—which could have the same joint probability distribution. The directionality of the arrow is something that CDT can make use of to tell that the two situations will respond differently to interventions at Drunk and Smoke: it is dangerous to be drunk around cliffs, but not to smoke (in this hypothetical world).
EDT cannot make use of those arrows. It just has Drunk—Fall—Death and Smoke—Lesion—Cancer (where it knows that the correlations between Drunk and Death are mediated by Fall, and the correlations between Smoke and Cancer are mediated by Lesion). If we suppose that adding an EDT node might mean that the correlation between Smoke and Lesion (and thus Cancer) might be mediated by EDT, then we must also suppose that adding an EDT node might mean that the correlation between Drunk and Fall (and thus Death) might be mediated by EDT.
(I should point out that the EDT node describes whether or not EDT was used to decide to drink, not to decide whether or not to fall off the cliff, by analogy with using EDT to decide whether or not to smoke, rather than to decide whether or not to have a lesion.)
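The contrast can be spelled out with the same kind of enumeration as the earlier chain sketch, now with the names from this example (numbers invented): one joint distribution read either as Drunk → Fall → Death or as Smoke ← Lesion → Cancer, where conditioning looks identical but intervening does not.

```python
from itertools import product

# Shared joint over (X, M, Y), factored as the chain X -> M -> Y.  Because X and Y
# are independent given M, the same joint also factors as the fork X <- M -> Y.
p_x = 0.3
p_m_given_x = {True: 0.7, False: 0.1}
p_y_given_m = {True: 0.9, False: 0.05}

P = {}
for x, m, y in product([True, False], repeat=3):
    p = p_x if x else 1 - p_x
    p *= p_m_given_x[x] if m else 1 - p_m_given_x[x]
    p *= p_y_given_m[m] if y else 1 - p_y_given_m[m]
    P[(x, m, y)] = p

def p_y_given_x(x_val):
    num = sum(p for (x, m, y), p in P.items() if x == x_val and y)
    return num / sum(p for (x, m, y), p in P.items() if x == x_val)

p_y = sum(p for (x, m, y), p in P.items() if y)

# Conditioning (all EDT has) gives the same numbers in both stories:
print(p_y_given_x(True), p_y_given_x(False))
# Under Drunk -> Fall -> Death:     p(Death | do(Drunk)) = p(Death | Drunk)  (the first number)
# Under Smoke <- Lesion -> Cancer:  p(Cancer | do(Smoke)) = p(Cancer)        (just the marginal)
print(p_y)
```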