In the real world, this is correct, but it is not mathematically necessary.
If you take physical causality out of the picture, then the orientation of the arcs is underspecified in the general case. But then, since you are only allowed to cut arcs that are incoming to the decision nodes, your decision model will be underspecified.
It is. How else can Omega be a perfect predictor?
If you are going to allow time travel, defined in a broad sense, then your causal network will have cycles.
The problem is that you can’t put any meaning into the direction of the arrows because they’re arbitrary.
But the point is that in EDT you don’t care about the direction of the arrows.
If you give me a causal diagram and the embedded probabilities for the environment, and ask me to predict what would happen if you did action A (i.e. counterfactual reasoning), you’ve already given me all I need to calculate the probabilities of any of the other nodes you might be interested in, for any action included in the environment description.
If I give you a causal diagram for Newcomb’s problem (or some variation thereof) you will make a wrong prediction, because causal diagrams can’t properly represent it.
If you give me a joint probability distribution for the environment, and ask me to predict what would happen if you did action A, I don’t have enough information to calculate the probabilities of the other nodes.
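To make this exchange concrete, here is a minimal sketch (with invented numbers) of two Bayesian networks over binary variables X and Y that encode the same joint distribution with opposite arrow orientations. Conditioning, which uses only the joint, agrees between them; intervening, which cuts the arcs incoming to the intervened node, does not:

```python
# A sketch (invented numbers) of two networks with the same joint but
# opposite arrows. Conditioning needs only the joint; intervening also
# needs the arrows.

# Model A: X -> Y
p_x1 = 0.3                       # P(X=1)
p_y1_x = {0: 0.2, 1: 0.9}        # P(Y=1 | X=x)

# The joint distribution P(x, y) implied by model A
joint = {(x, y): (p_x1 if x else 1 - p_x1)
                 * (p_y1_x[x] if y else 1 - p_y1_x[x])
         for x in (0, 1) for y in (0, 1)}

# Model B: Y -> X, with CPTs chosen via Bayes' rule to match the SAME joint
p_y1 = joint[(0, 1)] + joint[(1, 1)]

# Conditioning uses only the joint, so both models must agree:
print("P(Y=1 | X=1) =", joint[(1, 1)] / (joint[(1, 0)] + joint[(1, 1)]))  # 0.9

# Intervening = cut the arcs incoming to the intervened node, force its value.
# Model A: X is a parent of Y, so setting X still moves Y.
print("A: P(Y=1 | do(X=1)) =", p_y1_x[1])   # 0.9
# Model B: X is a child of Y, so cutting X's incoming arc leaves Y untouched.
print("B: P(Y=1 | do(X=1)) =", p_y1)        # 0.41
```

This is the sense in which a causal diagram answers do-style questions that a bare joint distribution leaves open.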
If the model includes myself as well as the environment, you will be able to make the correct prediction.
Of course, if you give this prediction back to me and it influences my decision, then the model has to include you as well, which may, in principle, cause Gödelian self-reference issues. But that’s a fundamental limit on the logical capabilities of any computable system; there are no easy ways around it. And it’s not as bad as it sounds: the fact that you can’t precisely predict everything about yourself doesn’t mean that you can’t predict anything, or that you can’t make approximate predictions. (For instance, GCC can compile and optimize GCC.)
Causal decision models are one way to approximate hard decision problems, and they work well in many practical cases. Newcomb-like scenarios are specifically designed to make them fail.
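To see the failure concretely, here is a toy expected-utility calculation for the standard Newcomb payoffs, assuming a 99% accurate predictor ($1,000,000 in the opaque box iff one-boxing was predicted; $1,000 always in the transparent box):

```python
# Toy Newcomb arithmetic (a sketch; standard payoffs, assumed 0.99 accuracy).
ACC = 0.99  # assumed P(prediction matches the actual choice)

# EDT conditions on the choice: choosing is evidence about the prediction.
edt_one_box = ACC * 1_000_000 + (1 - ACC) * 0
edt_two_box = ACC * 1_000 + (1 - ACC) * (1_000_000 + 1_000)
print(f"EDT: one-box {edt_one_box:,.0f} vs two-box {edt_two_box:,.0f}")
# -> one-box 990,000 vs two-box 11,000: EDT one-boxes and gets rich.

# CDT cuts the arcs into the decision node: the box contents are fixed
# before the choice, so for ANY probability q that the box is full,
# two-boxing gains an extra $1,000 -- a dominance argument.
for q in (0.0, 0.5, 1.0):
    assert q * 1_000_000 + 1_000 > q * 1_000_000  # CDT two-boxes, and loses
```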
But the point is that in EDT you don’t care about the direction of the arrows.
Yes, and the fact that EDT does not assign meaning to the direction of the arrows is exactly why it’s a less powerful language for describing environments.
If I give you a causal diagram for Newcomb’s problem (or some variation thereof) you will make a wrong prediction, because causal diagrams can’t properly represent it.
If you allow retrocausation, I don’t see why you think this is the case.
I’m not convinced that this is the case. Arrow orientation is an artifact of Bayesian networks, not a fundamental property of the world.
Causation going in one direction (if the nodes are properly defined) does appear to be a fundamental property of the real world.
I’m not sure what we are disagreeing about. In CDT you need causal Bayesian networks where the arrow orientation reflects physical causality. In EDT you just need probability distributions. You can represent them as Bayesian networks, but in this case arrow direction doesn’t matter, up to certain consistency constraints.
Why would EDT not having causal arrows be a problem?
Because the point of making decisions is to cause things to happen, so encoding information about causality is a good idea.
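As a minimal sketch of the claim that EDT only needs probability distributions: the decision rule below computes the argmax over E[U | A = a] from a joint P(a, o) alone, so no arrows appear anywhere. The names and numbers are invented for illustration:

```python
# A sketch of the bare EDT decision rule: argmax_a E[U | A = a].
# It needs only the joint distribution P(a, o) and a utility function;
# no graph, and hence no arrow direction, appears anywhere.

def edt_choice(joint, utility, actions, outcomes):
    def conditional_expected_utility(a):
        p_a = sum(joint[(a, o)] for o in outcomes)  # marginal P(A = a)
        return sum(joint[(a, o)] / p_a * utility(a, o) for o in outcomes)
    return max(actions, key=conditional_expected_utility)

# Invented toy numbers: the same joint could be drawn as A -> O or O -> A
# (with CPTs related by Bayes' rule); edt_choice cannot tell the difference.
joint = {("act1", "good"): 0.4, ("act1", "bad"): 0.1,
         ("act2", "good"): 0.1, ("act2", "bad"): 0.4}
utility = lambda a, o: 1.0 if o == "good" else 0.0
print(edt_choice(joint, utility, ["act1", "act2"], ["good", "bad"]))  # act1
```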
Disagree. The directionality of causation appears to be a consequence of the Second Law of Thermodynamics, which is not a fundamental law.
All the microscopic laws are completely compatible with there being a region of space-time more or less like ours, but in reverse, with entropy decreasing monotonically. In fact, in a sufficiently large world, such a region is to be expected, since the Second Law is probabilistic. In this region, matches will light before (from our perspective) they are struck, and ripples in a pond will coalesce to a single point and eject a rock from the pond. If we use nodes similar to the ones we do in our environment, then in order to preserve the Causal Markov Condition, we would have to draw arrows in the opposite temporal direction.
Causation is not a useful concept when we’re talking about the fundamental level of nature, precisely because all fundamental interactions (with some very obscure exceptions) are completely time-symmetric. Causation (and the whole DAG framework) becomes useful when we move to the macroscopic world of temporally asymmetric phenomena. And the temporal asymmetry is just a manifestation of the Second Law.
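A toy model makes this vivid. The sketch below (a standard textbook toy, coded here with invented parameters) uses the Kac ring, a deterministic and exactly reversible dynamics: disorder grows from a special low-entropy start, and running the same microscopic law backward yields a region in which it monotonically shrinks:

```python
# Kac ring: N sites on a ring, a two-colored ball at each site, and a fixed
# set of "marked" edges. Each step every ball moves one site clockwise,
# flipping color when it crosses a marked edge. Deterministic and reversible.
import random

N = 10_000
random.seed(0)
marked = [random.random() < 0.1 for _ in range(N)]  # fixed scattering edges
state = [1] * N                                     # low-entropy initial condition

def step_forward(s):
    # ball at site i-1 moves to site i, flipping at a marked edge
    return [s[i - 1] ^ marked[i - 1] for i in range(N)]

def step_backward(s):
    # the exact inverse map: the microscopic law is time-symmetric
    return [s[(i + 1) % N] ^ marked[i] for i in range(N)]

def disorder(s):  # 0 = perfectly ordered, ~0.5 = equilibrium
    return min(sum(s), N - sum(s)) / N

T = 500
for _ in range(T):
    state = step_forward(state)
print("after forward run:", disorder(state))   # near 0.5: entropy went up

for _ in range(T):
    state = step_backward(state)
print("after reversed run:", disorder(state))  # 0.0: the entropy-decreasing region
```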
Causation is not a useful concept when we’re talking about the fundamental level of nature, precisely because all fundamental interactions (with some very obscure exceptions) are completely time-symmetric.
Assuming CPT symmetry, the very reason why there’s still matter in the universe (as opposed to it all having annihilated with antimatter) in the first place must be one of those very obscure exceptions.
It’s true that CP-violations appear to be a necessary condition for the baryon asymmetry (if you make certain natural-seeming assumptions). It’s another question whether the observed CP-violations are sufficient for the asymmetry, if the other Sakharov conditions are met. And one of the open problems in contemporary cosmology is precisely that they don’t appear to be sufficient, that the subtle CP-violations we have observed so far (only in four types of mesons) are too subtle to account for the huge asymmetry between matter and anti-matter. They would only account for a tiny amount of that asymmetry. So, yeah, the actual violations of T-symmetry we see are in fact obscure exceptions. They are not sufficient to account for either the pervasive time asymmetry of macroscopic phenomena or the pervasive baryon asymmetry at the microscopic level. There are two ways to go from here: either there must be much more significant CP-violations that we haven’t yet been able to observe, or the whole Sakharov approach of accounting for the baryon asymmetry dynamically is wrong, and we have to turn to another kind of explanation (anthropic, maybe?). The latter option is what we have settled on when it comes to time asymmetry—we have realized that a fundamental single-universe dynamical explanation for the Second Law is not on the cards—and it may well turn out to be the right option for the baryon asymmetry as well.
It’s also worth noting that CP-violations by themselves would be insufficient to account for the asymmetry, even if they were less obscure than they appear to be. You also need the Second Law of Thermodynamics (this is the third Sakharov condition). In thermodynamic equilibrium any imbalance between matter and anti-matter generated by CP-violating interactions would be undone.
In any case, even if it turns out that CP-violating interactions are plentiful enough to account for the baryon asymmetry, they still could not possibly account for macroscopic temporal asymmetry. The particular sort of temporal asymmetry we see in the macroscopic world involves the disappearance of macroscopically available information. Microscopic CP-violations are information-preserving (they are CPT symmetric), so they cannot account for this type of asymmetry. If there is going to be a fundamental explanation for the arrow of time it would have to involve laws that don’t preserve information. The only serious candidate for this so far is (real, not instrumental) wavefunction collapse, and we all know how that theory is regarded around these parts.
I should make clear that by ‘fundamental’ I was not speaking in terms of physics, but in terms of decision theory, where causation does seem to be of central importance.
If we use nodes similar to the ones we do in our environment, then in order to preserve the Causal Markov Condition, we would have to draw arrows in the opposite temporal direction.
This reads to me like “conditioning on us being in a weird part of the universe where less likely events are more likely, then when we apply the assumption that we’re in a normal part of the universe where more likely events are more likely we get weird results.” And, yes, I agree with that reading, and I’m not sure what you want that to imply.
I wanted to imply that the temporal directionality of causation is a consequence of the Second Law of Thermodynamics. I guess the point would be that the “less likely” and “more likely” in your gloss are only correct if you restrict yourself to a macroscopic level of description. Described microscopically, both regions are equally likely, according to standard statistical mechanics. This is related to the idea that non-fundamental macroscopic factors make a difference when it comes to the direction of causal influence.
But yeah, this was based on misreading your use of “fundamental” as referring to physical fundamentality. If you meant decision-theoretically fundamental, then I agree with you. I thought you were espousing the Yudkowsky-esque line that causal relations are part of the fundamental furniture of the universe and that the Causal Markov Condition is deeper and more fundamental than the Second Law of Thermodynamics.