The causal incentives working group deserves a mention; its work is directly on AI safety. Though it's a bit older, I gained a lot of clarity about AI safety concepts from "Modeling AGI Safety Frameworks with Causal Influence Diagrams", which is quite accessible even if you don't have much training in causality.
David Reber
[Warning: "cyclic" is overloaded. I think this post uses it in the dynamical-systems sense, i.e. variables reattain the same state later in time. I'm using Pearl's causal sense: variable X is functionally dependent on variable Y, which is itself functionally dependent on X.]
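To make the Pearl-style sense concrete (my own toy example, not from the post): a two-variable linear SCM with mutual dependence,

```latex
X = \alpha Y + U_X, \qquad Y = \beta X + U_Y
```

is cyclic because X's structural equation mentions Y and vice versa; when \(\alpha\beta \neq 1\) it still has a unique solution, e.g. \(X = (U_X + \alpha U_Y)/(1 - \alpha\beta)\). Nothing here requires any variable to revisit a state over time, which is the dynamical-systems sense.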
Turns out Chaos is not Linear...
I think the bigger point (which is unaddressed here) is that chaos can't arise in acyclic causal models (SCMs). Chaos can only arise when there is feedback between the variables, right? Hence one characterization of chaos is that orbits of all periods are present in the system: you can't have an orbit at all without functional feedback. The linear-approximations post, by contrast, is working with an acyclic Bayes net.
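To illustrate the feedback point (my own sketch, not from either post): the logistic map is the standard minimal chaotic system, and its chaos comes precisely from the state feeding back into itself. Two trajectories started a billionth apart decorrelate within a few dozen steps.

```python
# Logistic map x_{t+1} = r * x_t * (1 - x_t): the next state depends
# functionally on the current state, i.e. the variable feeds back into itself.
# At r = 4 the map is chaotic: nearby trajectories diverge exponentially.

def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-9)  # perturb the initial condition slightly

# Sensitive dependence: the tiny perturbation is amplified until the
# two trajectories are effectively uncorrelated.
print(max(abs(x - y) for x, y in zip(a, b)))
```

Cut the feedback (make each x_t a fresh function of exogenous noise only, as in an acyclic model) and this divergence mechanism disappears.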
I believe this sort of phenomenon [ chaos ] plays a central role in abstraction in practice: the “natural abstraction” is a summary of exactly the information which isn’t wiped out. So, my methods definitely needed to handle chaos.
Not all useful systems in the world are chaotic, and the Telephone Theorem doesn't rely on chaos as its mechanism of information loss. So it seems too strong to say the methods definitely need to handle chaos. Surely there are useful footholds between the extremes of "acyclic + linear" and "cyclic + chaos": for instance, "cyclic + linear".
At any rate, Foundations of Structural Causal Models with Cycles and Latent Variables could provide a good starting point for cyclic causal models (also called structural equation models). There are other formalisms as well, but I'm partial to this one because of how closely it matches Pearl.
As I understand it, the proof in the appendix only assumes we're working with Bayes nets (i.e. just factorizations of probability distributions). That is, the graphs aren't assumed to be causal in nature (they needn't be the causal diagrams of SCMs), although of course the argument still goes through under that stronger assumption.
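For concreteness, here's what "just a factorization" means in a toy case (my own illustration, not from the appendix): a chain-structured Bayes net over binary variables, where the joint is defined entirely by the factorization, with no causal semantics attached to the arrows.

```python
from itertools import product

# A chain Bayes net A -> B -> C over binary variables. The graph only
# encodes the factorization P(A, B, C) = P(A) * P(B | A) * P(C | B);
# nothing requires the arrows to correspond to causal mechanisms.

p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.5, 1: 0.5}}

def joint(a, b, c):
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# The factorization defines a proper distribution: it sums to 1.
total = sum(joint(a, b, c) for a, b, c in product([0, 1], repeat=3))
print(total)
```

Any argument that only manipulates this factorization (conditional independences, marginalizations) holds whether or not the graph is read causally.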
Is that correct?
Anecdotally, I’ve found the same said of Less Wrong / Alignment Forum posts among AI safety / EA academics: that it amounts to an echo chamber that no one else reads.
I suspect each community takes its collective unfamiliarity with the other as evidence that the other isn't doing its part to disseminate its ideas properly. Of course, neither community seems particularly interested in taking the time to read up on the other, and each seems to think the other should simply mimic its own example (LWers want more LW synopses of academic papers; academics want AF work published in journals).
Personally I think this is symptomatic of a larger camp-ish divide between the two, which is worth trying to bridge.