Two points.
First, I don’t mind the new format as long as there is some equivalent written reference I can go to, the same way the embedded agency sequence has both the full written document and the fun diagrams. This is to make it easier to reference individual components of the material for later discussion. On Reddit, I find it far more difficult to have a discussion about specific points in video content, because quoting a section properly requires transcribing it first.
Second, I might have missed this, but is there a reason we’re limiting ourselves to abstract causal models? I get that they’re useful for answering queries with the do() operator, but there are many situations where it doesn’t make sense to model the system as a DAG.
Great question. I considered addressing that in the intro video, but decided to keep the “why this topic?” question separate.
I talk about this a fair bit in Embedded Agency via Abstraction. Major reasons for the choice:
Causal models are a well-characterized, self-contained model class. We know what all the relevant queries are. At the same time, they apply to a huge variety of real-world systems, at multiple levels of abstraction, and (with symmetry) even provide a Turing-equivalent model of computation.
Built-in counterfactuals mean we don’t need a bunch of extra infrastructure to apply results to decision theory. It’s hard to imagine a theory of agency without some kind of counterfactuals in it (since off-equilibrium behavior matters for game theory), and causal models are the simplest model class with built-in support for counterfactuals (a toy sketch of both kinds of query follows this list).
Combining the previous two bullets: I expect that causal models are a relatively well-characterized model class which is nonetheless likely to exhibit most of the key qualitative properties which we need to figure out for embedded agency.
Finally, my intuition is that causal models (with simple function-nodes) tend to naturally encourage avoiding black boxes, in a way that e.g. logic or Turing machines do not. They make it natural to think about computations rather than functions. That, in turn, will hopefully provide a built-in line of defense against various diagonalization problems.
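To make the first two bullets concrete, here is a minimal Python sketch (my own toy example, not something from the video series): a three-variable structural causal model, an interventional do() query, and a counterfactual answered by the standard abduction / intervention / prediction recipe. The rain/sprinkler/wet variables and the probabilities are illustrative assumptions.

```python
import itertools
import random

# Toy structural causal model: rain -> sprinkler -> wet (all 0/1).
# do() replaces a variable's mechanism with a fixed value.
def model(u_rain, u_sprinkler, do=None):
    do = do or {}
    rain = do.get("rain", u_rain)
    sprinkler = do.get("sprinkler", u_sprinkler if rain == 0 else 0)
    wet = do.get("wet", 1 if (rain or sprinkler) else 0)
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet}

# Interventional query: P(wet = 1 | do(rain = 0)), averaging over the noise terms.
rng = random.Random(0)
samples = [model(int(rng.random() < 0.3), int(rng.random() < 0.5), do={"rain": 0})
           for _ in range(10_000)]
print(sum(s["wet"] for s in samples) / len(samples))  # ~= P(sprinkler on) = 0.5

# Counterfactual ("had the sprinkler been off, would the grass still be wet?"):
# abduction (find noise consistent with what we observed), then intervene, then predict.
observed = {"rain": 0, "sprinkler": 1, "wet": 1}
noise = [(ur, us) for ur, us in itertools.product([0, 1], repeat=2)
         if model(ur, us) == observed]
print([model(ur, us, do={"sprinkler": 0})["wet"] for ur, us in noise])  # [0]
```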
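And as a gesture at the last bullet: a black-box function hides its internal structure, while the same computation written as a DAG of simple function-nodes exposes every intermediate value as something you can inspect or intervene on. Again just a toy contrast of my own; the CIRCUIT encoding and the run helper are assumptions, not anything from the series.

```python
import operator

# Black box: callers see only input -> output.
def f(x, y):
    return (x + y) * (x - y)

# Same computation as a DAG of simple function-nodes: each node is
# (function, parent names), listed in topological order.
CIRCUIT = {
    "s":   (operator.add, ("x", "y")),
    "d":   (operator.sub, ("x", "y")),
    "out": (operator.mul, ("s", "d")),
}

def run(circuit, inputs, do=None):
    vals = dict(inputs)
    vals.update(do or {})                 # intervened nodes keep their forced value
    for name, (fn, parents) in circuit.items():
        if name not in vals:
            vals[name] = fn(*(vals[p] for p in parents))
    return vals

print(f(5, 3))                                       # 16, structure hidden
print(run(CIRCUIT, {"x": 5, "y": 3}))                # every intermediate visible
print(run(CIRCUIT, {"x": 5, "y": 3}, do={"s": 0}))   # intervene inside the computation
```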
I don’t mind the new format as long as there is some equivalent written reference I can go to.

I’m still undecided on how to handle this. The problem with e.g. a transcription is that I’m largely talking about the diagrams, pointing at them, drawing on them, etc.; that’s a big part of why it feels easier to communicate this stuff via video in the first place. Maybe labeling the visuals would help? Not sure. I’m definitely open to suggestions on that front.