This point might be useless, but it feels like we are substituting sub-maps for the territory here. This example looks to me like:
Circuits → Map
Physics → Sub-map
Reality → Territory
I intuitively feel like a causal signature should show up in the sub-map of whichever level you are currently examining. I am tempted to go so far as to say that the degree to which the sub-map allows causal inference is effectively a measure of how close the layers are on the ladder of abstraction. In my head this sounds something like “perfect causal inference implies the minimum coherent abstraction distance.”
I do agree with the sub-maps point, and think it is relevant, although I also don’t think we currently understand abstraction well enough to figure it out.
I intuitively feel like a causal signature should show up in the sub-map of whichever level you are currently examining...
Counterexample: feedback control. In day-to-day activity, I use a model in which turning the dial on a thermostat causes a room to heat up. The underlying reality is much more complicated, with a bunch of back-and-forth causal arrows. One way to say it: the purpose of a feedback controller is to make a system behave, at the abstract level, as if it had a different causal structure.
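To make that concrete, here is a rough Python sketch (the function name simulate_room and all the constants are invented for illustration, not taken from anywhere). The low-level loop has arrows running both ways: the current temperature determines heater power, and heater power in turn moves the temperature. Yet from the outside, the only arrow that survives is “setpoint → steady-state temperature” (up to the usual proportional-control offset).

```python
# A rough sketch, not a real controller: a proportional thermostat in a leaky room.
# All constants (gain, leak_rate, outside_temp) are made-up illustrative values.

def simulate_room(setpoint, steps=5000, dt=0.01,
                  outside_temp=5.0, leak_rate=0.1, gain=2.0):
    """Run the feedback loop forward and return the final room temperature."""
    temp = outside_temp
    for _ in range(steps):
        heater_power = max(0.0, gain * (setpoint - temp))          # temp -> heater
        dtemp = heater_power - leak_rate * (temp - outside_temp)   # heater -> temp
        temp += dt * dtemp
    return temp

if __name__ == "__main__":
    for knob in (15.0, 20.0, 25.0):
        print(f"setpoint {knob:5.1f} -> steady-state temperature {simulate_room(knob):5.2f}")
```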
I have a hard time thinking of that example as a different causal structure. Rather, I think of it as keeping the same causal structure, but abstracting most of it away until we reach the level of the knob; then we make the knob concrete. This creates an affordance.
Of course, when I am in my house I am approaching it from the knob end, so mostly I just assume some layers of hidden detail behind it.
Another way to say this is that I tend to view it as compressing causal structure.
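As a toy illustration of what I mean by compressing causal structure (just a sketch in my own framing, not a standard algorithm; the node names are invented): take the detailed causal graph, mark the knob and the room temperature as the only visible nodes, and fold every chain of hidden intermediaries into a single abstract edge.

```python
# A toy illustration of "compressing causal structure": keep only the knob and the
# room temperature visible, and collapse chains of hidden intermediate variables
# into one abstract edge. Node names are invented for the example.

def compress(edges, visible):
    """Return the edges between visible nodes, skipping over hidden intermediaries."""

    def reachable_visible(node, seen):
        found = set()
        for child in edges.get(node, []):
            if child in seen:
                continue
            if child in visible:
                found.add(child)
            else:
                found |= reachable_visible(child, seen | {child})
        return found

    return {(src, dst) for src in visible for dst in reachable_visible(src, {src})}

# Simplified low-level graph of the thermostat (the feedback arrow from temperature
# back to the controller is omitted here just to keep the example acyclic).
low_level = {
    "knob": ["setpoint"],
    "setpoint": ["controller"],
    "controller": ["heater"],
    "heater": ["room_temperature"],
}

print(compress(low_level, visible={"knob", "room_temperature"}))
# {('knob', 'room_temperature')}
```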