> So if we can describe a heuristic that gives us the same answer as conditioning on all of Z, then an EDT agent will want to use it.
This is wrong or at least badly incomplete. I don’t think it matters to the main point of this post (that EDT does “normal-looking causal inference” in normal cases). But it’s pretty central for the actual live philosophical debates about EDT v CDT v TDT.
In particular, it’s true that we’d like to condition on all of Z, but if we lack introspective access to parts of Z then this procedure won’t actually do that. It ignores the effects that flow via Z, but it doesn’t know the values in Z, so there’s no real justification for ignoring those effects. Actually handling this issue is very subtle and has been discussed a lot. I think it’s fine if you use any algorithm A that conditions on A() = X, but in general it’s very messy to talk about algorithms that take facts as inputs without knowing those facts.
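To make the introspective-access point concrete, here is a toy smoking-lesion-style sketch (mine, not from the post; the variable names and numbers are illustrative assumptions): a hidden Z drives both the action X and the outcome Y, and X has no causal effect on Y. Conditioning on X alone still makes the two actions look very different, because X carries evidence about Z, while conditioning on (X, Z) screens the confound off. An agent that can't introspect on Z has no way to do the latter.

```python
# Toy sketch: hidden Z influences both the action X and the outcome Y;
# X has no causal effect on Y. All numbers are made up for illustration.
from itertools import product

# P(Z): hidden "lesion" present with probability 0.5
p_z = {0: 0.5, 1: 0.5}
# P(X | Z): the lesion makes choosing X = 1 much more likely
p_x_given_z = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}
# P(Y | Z): the outcome depends only on Z, not on X
p_y_given_z = {0: {0: 0.1, 1: 0.9}, 1: {0: 0.9, 1: 0.1}}

# Full joint distribution P(Z, X, Y)
joint = {
    (z, x, y): p_z[z] * p_x_given_z[z][x] * p_y_given_z[z][y]
    for z, x, y in product([0, 1], repeat=3)
}

def expected_y(condition):
    """E[Y | condition], where condition filters (z, x, y) triples."""
    mass = sum(p for (z, x, y), p in joint.items() if condition(z, x, y))
    return sum(p * y for (z, x, y), p in joint.items() if condition(z, x, y)) / mass

# Conditioning on X alone: X is evidence about Z, so the actions look very
# different even though X has no causal effect on Y.
print(expected_y(lambda z, x, y: x == 0))  # ~0.74
print(expected_y(lambda z, x, y: x == 1))  # ~0.26

# Conditioning on (X, Z): once Z is known, X makes no difference to Y.
print(expected_y(lambda z, x, y: x == 0 and z == 0))  # 0.9
print(expected_y(lambda z, x, y: x == 1 and z == 0))  # 0.9
```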