I don’t know enough math to tell whether you’ve already covered this in your examples, but here’s my intuition, typed without much reflection or editing (okay, disclaimer over):
If we have two variables, A and C, and we’re considering A, C, and (A xor C), it sounds to me like we’ve privileged things arbitrarily in some sense. Relabeling them A, B, and C, it’s clear we could have picked any two of them as the “base” variables and the third as the “xor’d” variable, so there should be no preferred counterfactual. It’s a loopy cause: a causal diagram that’s not a DAG. Which doesn’t show up IRL, like going back in time to kill your grandfather.
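A minimal check of that symmetry (Python, purely for illustration): once C is defined as A xor B, any two of the three variables recover the third, so which one counts as “derived” is just a labeling choice.

```python
# With C = A ^ B, each variable equals the xor of the other two,
# so no labeling of {A, B, C} is structurally special.
for A in (0, 1):
    for B in (0, 1):
        C = A ^ B
        assert B == A ^ C  # B is "the xor'd one" under a relabeling
        assert A == B ^ C  # and so is A
```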
But we often pretend loops occur by abstracting over time and saying steady-state is a thing (or steady-states, where we’re looking at the map of transitions between them), and then we get loops and start studying feedback and whatnot. But if you unpacked any of those loops you’d get a very repetitive DAG: the initial diagram copied over and over, with one-way arrows from each copy to the next.
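A toy sketch of that unpacking (the two-variable loop between x and y is hypothetical, just to illustrate the construction): index each copy of the variables by time step, and every arrow runs from step t to step t+1, so the unrolled graph can’t contain a cycle.

```python
# Unroll a loopy diagram x <-> y into time-indexed copies:
# ("x", t) -> ("y", t+1) and ("y", t) -> ("x", t+1).
T = 4
edges = []
for t in range(T):
    edges.append((("x", t), ("y", t + 1)))
    edges.append((("y", t), ("x", t + 1)))

# Every edge strictly increases the time index, so the unrolled
# graph is a DAG even though the abstracted diagram was loopy.
assert all(src[1] < dst[1] for src, dst in edges)
```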
Seems like there are three options for dealing with {A, B, C}. The three are isomorphic to each other, so in some sense we shouldn’t be able to say which counterfactuals to use. We could:
do our modeling relative to a specified imposed ordering of all variables, which seems really hard, or
somehow calculate all possible results and average over permutations, which seems either factorially harder or much easier depending on Math!, or
assume there is hidden structure, i.e. that A, B, and C are abstractions atop a real DAG, and use some (not-known-to-me) mathematics of loopy causation to define something other than counterfactuals atop the variables, treating counterfactuals over A, B, C as a sort of type error.
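To make the second option concrete, here’s a toy sketch under a convention I’m making up on the spot (not anything standard): for each labeling, treat one variable as derived via xor, compute the counterfactual of flipping A under that labeling, and average the resulting values over all three labelings.

```python
def counterfactual(state, flip, derived):
    """Flip `flip` under the labeling where `derived` is the xor'd
    variable. If we flipped a base variable, recompute the derived
    one; if we flipped the derived variable itself, the defining
    equation is simply broken by the intervention."""
    new = dict(state)
    new[flip] = 1 - new[flip]
    base = [v for v in "ABC" if v != derived]
    if flip != derived:
        new[derived] = new[base[0]] ^ new[base[1]]
    return new

state = {"A": 1, "B": 0, "C": 1}  # consistent: C == A ^ B
results = [counterfactual(state, "A", d) for d in "ABC"]

# Averaging over the three labelings gives fractional "values",
# which already hints this gets factorially awkward in general.
avg = {v: sum(r[v] for r in results) / 3 for v in "ABC"}
```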
That sounds about right to me. I think people have taken stabs at looking for causality-like structure in logic, but they haven’t found anything useful.
I’m not following what you’re saying about loopy causation. How are you constructing this graph?