Typically, no. “Acausal trade” usually refers to a different mechanism: “I do this thing for you if you do this other thing for me.” Discussions of acausal trade often involve the agents attempting to simulate each other. In contrast, ECL flows through direct correlation: “If I do this, I learn that you are more likely to also do this.” For more, see Christiano’s (2022) discussion of correlation versus reciprocity and Oesterheld (2017, section 6.1).
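To make the “direct correlation” mechanism concrete, here is a minimal numerical sketch (the prisoner’s-dilemma payoffs and the correlation parameter are made up for illustration, not taken from the sources above): conditioning on my own action shifts my credence in the other agent’s action, and with enough correlation cooperating has the higher conditional expected utility, without any explicit modelling of them modelling me.

```python
# A minimal EDT-style sketch of "if I do this, I learn you are more likely to
# also do this". All numbers are made-up assumptions for illustration.

R, S, T, P = 3.0, 0.0, 5.0, 1.0   # my payoffs for (C,C), (C,D), (D,C), (D,D)
corr = 0.8                         # P(the other agent matches my action)

def expected_utility(my_action):
    # Conditioning on my action shifts my credence in the other agent's action.
    p_they_cooperate = corr if my_action == "C" else 1 - corr
    if my_action == "C":
        return p_they_cooperate * R + (1 - p_they_cooperate) * S
    return p_they_cooperate * T + (1 - p_they_cooperate) * P

print("E[U | I cooperate]:", expected_utility("C"))  # 0.8*3 + 0.2*0 = 2.4
print("E[U | I defect]:  ", expected_utility("D"))   # 0.2*5 + 0.8*1 = 1.8
# Cooperating wins on conditional expected utility even though defecting
# dominates causally; no recursive simulation of the other agent is needed.
```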
I’m skeptical about the extent to which these are actually different things. Oesterheld says “superrationality may be seen as a special case of acausal trade in which the agents’ knowledge implies the correlation directly, thus avoiding the need for explicit mutual modeling and the complications associated with it”. So at the very least, we can think of one as a subset of the other (though I think I’d actually classify it the other way round, with acausal trade being a special case of superrationality).
But it’s not just that. Consider an ECL model that concludes: “my decision is correlated with X’s decision, therefore I should cooperate”. Reaching this conclusion also requires complicated recursive reasoning: specifically, reasons for thinking that the correlation holds even given that you’re taking the correlation into account when making your decision.
(E.g. suppose you know that you are similar to X, except that you are doing ECL and X isn’t. Then ECL might break the previous correlation between you and X. So the ECL process actually needs to reason “the outcome of the decision process I’m currently running is correlated with the outcome of the decision process that they’re running”, and I think realistically finding a fixed point probably wouldn’t look that different from standard descriptions of acausal trade.)
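Here is a toy version of that fixed-point search (extending the sketch above, again with made-up numbers): each pass asks “given my current guess about my own policy, what do I expect them to do conditional on each of my actions?”, then updates my policy to the EDT-best response, until nothing changes.

```python
# A toy fixed point for the recursion above. `rho` is a made-up parameter for
# how strongly the other agent's deliberation tracks mine; the payoffs are
# made up as well.

R, S, T, P = 3.0, 0.0, 5.0, 1.0   # my payoffs for (C,C), (C,D), (D,C), (D,D)
rho = 0.8                          # assumed strength of the correlation
p_coop = 0.5                       # initial guess about my own P(cooperate)

for _ in range(100):
    # If they deliberate like me, my action is evidence about theirs;
    # otherwise they just do whatever my current guess about myself says.
    p_theirs_if_i_coop = rho * 1.0 + (1 - rho) * p_coop
    p_theirs_if_i_defect = rho * 0.0 + (1 - rho) * p_coop
    eu_coop = p_theirs_if_i_coop * R + (1 - p_theirs_if_i_coop) * S
    eu_defect = p_theirs_if_i_defect * T + (1 - p_theirs_if_i_defect) * P
    p_new = 1.0 if eu_coop > eu_defect else 0.0
    if abs(p_new - p_coop) < 1e-9:
        break
    p_coop = p_new

print("policy at the fixed point: P(cooperate) =", p_coop)
# With rho = 0.8 this settles on cooperation; with a low rho it settles on
# defection. The worry in the parenthetical above is about whether a high rho
# is still justified once you condition on running this very deliberation.
```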
This may be another example of the phenomenon Paul describes in his post on why EDT > CDT: although EDT is technically more correct, in practice you need to do something like CDT to reason robustly. (In this case ECL is EDT and acausal trade is more CDTish.)
I’m not sure I understand exactly what you’re saying, so I’m just gonna write some vaguely related things about classic acausal trade + ECL:
I’m actually really confused about the exact relationship between “classic” prediction-based acausal trade and ECL, and I think I tend to think about them as less crisply different than others do. I tried to unconfuse myself about that for a few hours some months ago and just ended up with a mess of a document. An intuitive way to differentiate them:
ECL leverages the correlation between you and the other agent “directly.”
“Classic” prediction-based acausal trade leverages the correlation between you and the other agent’s prediction of you. (Which, intuitively, they are less in control of than their own decision-making.)
--> This doesn’t look like a fundamental difference between the mechanisms (and maybe there are in-betweeners? But I don’t know of any set-ups), but it does seem to make a difference in practice or something?
On the recursion question:
I agree that ECL has this whole “I cooperate if I think that makes it more likely that they cooperate” structure, so there’s definitely also some prediction-flavoured thing going on, and often the deliberation about whether they’ll be more likely to cooperate when you do will include “they think that I’m more likely to cooperate if they cooperate”. So it’s kind of recursive.
Note that ECL at least doesn’t strictly require that. You can in principle do ECL with rocks: “My world model says that, conditioning on me taking action X, the likelihood of this rock falling down is higher than if I condition on taking action Y.” To be clear, if action X isn’t “throw the rock” or something similar, that’s a pretty weird world model. You probably can’t do “classic” acausal trade with rocks?
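As a trivial concretisation of the rock case (the joint distribution is made up), this is just ordinary conditioning in a world model; the only reason the conditional probabilities differ here is the causal link, which is why a rock version without such a link would be a pretty weird world model.

```python
# A toy world model for the "ECL with rocks" point: conditioning on my action
# changes the probability I assign to the rock falling, purely as a fact about
# my joint distribution. The numbers are made up.

# joint distribution over (my_action, rock_falls)
joint = {
    ("throw", True): 0.45,
    ("throw", False): 0.05,
    ("walk_away", True): 0.05,
    ("walk_away", False): 0.45,
}

def p_rock_falls_given(action):
    p_true = joint[(action, True)]
    p_false = joint[(action, False)]
    return p_true / (p_true + p_false)

print(p_rock_falls_given("throw"))      # 0.9
print(p_rock_falls_given("walk_away"))  # 0.1
```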
Some more out-of-order, not-thought-out, somewhat incoherent thinking-out-loud thoughts and intuitions:
More random and less coherent: something something about how, when you think of an agent using some meta-policy to answer the question “What object-level policy should I follow?”, there’s some intuitive sense in which ECL is recursive in the meta-policy while “classic” acausal trade is recursive in the object-level policy. I’m highly skeptical of this meta-policy/object-level-policy distinction making sense, though, and I’m also not confident in what I said about which type of trade is recursive in what.
Another intuitive difference is that with classic acausal trade, you usually want to verify whether the other agent is cooperating. In ECL you don’t. Also, something something about how it’s great to learn a lot about your trade partner for classic acausal trade and it’s bad for ECL? (I suspect that there’s nothing actually weird going on here and that this is because it’s about learning different kinds of things. But I haven’t thought about it enough to articulate the difference confidently and clearly.)
The concept of a commitment race doesn’t seem to make much sense when thinking just about ECL, and maybe nailing down where that difference comes from is interesting?