This post is a mixture of two questions: “interventions” from an agent which is part of the world, and restrictions on the allowed interventions.
The first is actually a problem, and is closely related to the problem of how to extract a single causal model which is executed repeatedly from a universe in which everything only happens once. Pearl’s answer, from IIRC Chapter 7 of Causality, which I find 80% satisfying, is about using external knowledge about repeatability to consider a system in isolation. The same principle gets applied whenever a researcher tries to shield an experiment from outside interference.
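For a mechanical picture of what “considering a system in isolation” amounts to, here is a minimal sketch (Python, with invented variables; not Pearl’s own notation): an intervention do(X=x) replaces X’s structural function with a constant, cutting X off from the rest of the world.

```python
import random

# Toy structural causal model: W -> X -> Y, where W stands in for
# "the rest of the world". All names here are illustrative.
def sample(do_x=None):
    w = random.gauss(0, 1)                            # exogenous "world" noise
    x = w + random.gauss(0, 0.1) if do_x is None else do_x
    y = 2 * x + random.gauss(0, 0.1)                  # Y listens only to X
    return w, x, y

# Observationally, W and Y co-vary (through X). Under do(X=1) the
# W -> X edge is severed, so Y no longer depends on W at all.
observational = [sample() for _ in range(10_000)]
intervened = [sample(do_x=1.0) for _ in range(10_000)]

# Treating sample() as repeatable is exactly the "external knowledge
# about repeatability": nothing inside the model itself supplies it.
```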
The second is about limiting allowed interventions. This looks like a special case of normality conditions, which are described in Chapter 3 of Halpern’s book. Halpern’s treatment of normality conditions actually involves a normality ordering on worlds, though this can easily be massaged into a normality ordering on possible interventions. I don’t see any special mileage in making the normality ordering depend on complexity, as opposed to any other arbitrary ordering, though someone may be able to find some interesting interaction between normality and complexity.
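As a toy illustration of that massaging step (the worlds, ranks, and lifting rule below are all invented, not Halpern’s definitions): given a normality rank on worlds, one simple lift is to rank an intervention by the most normal world consistent with it.

```python
# Normality ranks on worlds (lower = more normal); each world is an
# assignment (match_struck, fire). All ranks here are invented.
world_rank = {
    (True, True): 0,    # match struck, fire: normal
    (True, False): 1,   # match struck, no fire: a bit odd
    (False, False): 0,  # no match, no fire: normal
    (False, True): 2,   # fire with no match: very abnormal
}

def intervention_rank(var, value):
    """Lift the world ordering to interventions: do(var=value) is as
    normal as the most normal world in which var takes that value."""
    idx = {"match_struck": 0, "fire": 1}[var]
    return min(r for w, r in world_rank.items() if w[idx] == value)

# do(fire=True) gets rank 0 here (via the struck-match world), which
# shows the lifting is sensitive to how you quantify over worlds.
print(intervention_rank("fire", True))   # -> 0
```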
More broadly, this is part of the general problem that our current definitions of actual causation are extremely model-sensitive, which I consider serious. I don’t see a mechanistic resolution, but I did find this essay extremely thought-provoking; it posits considering interventions in all possible containing models: http://strevens.org/research/expln/MacRules.pdf
Pearl’s answer, from IIRC Chapter 7 of Causality, which I find 80% satisfying, is about using external knowledge about repeatability to consider a system in isolation. The same principle gets applied whenever a researcher tries to shield an experiment from outside interference.
This is actually a good illustration of what I mean. You can’t shield an experiment from outside influence entirely, not even in principle, because it’s you doing the shielding, and your activity is caused by the rest of the world. If you decide to only look at a part of the world, one that doesn’t contain you, that’s not a problem, but that’s just assuming that that route of influence doesn’t matter. Similarly, “knowledge about repeatability” is causal knowledge. This answer just tells you how to gain causal knowledge of parts of the world, given that you already have some causal knowledge about the whole. So you can’t apply it to the entire world. This is why I say it doesn’t go well with embedded agency.
The second is about limiting allowed interventions.
No? What I’m limiting is what dependencies we’re considering. And it seems that what you say after this is about singular causality, and I’m not really concerned with that. Having a causal web is sufficient for decision theory.
Causal inference has always been about how to take small assumptions about causality and turn them into big inferences about causality. It’s very bad at getting causal knowledge from nothing; that has long been known.
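A textbook instance of that trade, sketched in Python (the back-door adjustment; the model and numbers are invented for illustration): assume the small causal fact that Z is the only common cause of X and Y, and observational frequencies alone then pin down the interventional quantity P(y | do(x)).

```python
import random

# Data-generating SCM (hidden from the analyst): Z -> X, Z -> Y, X -> Y.
def observe():
    z = random.random() < 0.5
    x = random.random() < (0.8 if z else 0.2)
    y = random.random() < (0.9 if (x and z) else 0.5 if (x or z) else 0.1)
    return z, x, y

data = [observe() for _ in range(100_000)]

# Small assumption: Z blocks every back-door path from X to Y.
# Big inference: P(y | do(x)) = sum_z P(y | x, z) P(z), estimable
# from purely observational data.
def p_y_do_x(x):
    total = 0.0
    for z in (True, False):
        p_z = sum(zz == z for zz, _, _ in data) / len(data)
        cell = [yy for zz, xx, yy in data if zz == z and xx == x]
        total += p_z * sum(cell) / len(cell)
    return total

print(p_y_do_x(True))   # ~0.7: a causal effect, with no experiment run
```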
For the first: Well, yep, that’s why I said I was only 80% satisfied.
For the second: I think you’ll need to give a concrete example, with edges, probabilities, and functions. I’m not seeing how to apply thinking about complexity to a type-causality setting, where it’s assumed you have actual probabilities on co-occurrences.
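To be concrete about the shape of example being asked for (the contents below are entirely placeholder): explicit nodes, edges, structural functions, and probabilities on the exogenous variables.

```python
# Placeholder skeleton of the requested concrete example: edges,
# probabilities, and functions, all invented.
model = {
    "nodes": ["U", "X", "Y"],
    "edges": [("U", "X"), ("X", "Y")],
    "functions": {                        # structural equations
        "X": lambda u: u,                 # X := U
        "Y": lambda x: not x,             # Y := NOT X
    },
    "P_exogenous": {"U": {True: 0.3, False: 0.7}},
}
```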