So sometimes we can draw <-> simply to denote a conditional independence model that resembles those you get from a DAG with unobserved variables … except that Nature is annoying and doesn’t actually have any underlying DAG.
If you are confused by this, you are in good company! I am still thinking very hard about what this means.
Strangely enough, I’m not confused by it: until someone reduces quantum mechanics to some lower-level non-quantum physics (which, apparently, a few people are actually working on), I’ve just gone and accepted that the real causative agent in Nature is a joint probability distribution that is allowed to set a whole tuple of nonlocal outcome variables as it evolves.
But anyway, yes: that’s roughly the kind of “correlation arrow” I think should be drawn in a CDT causal graph to handle Newcomblike problems, with CDT modified just slightly to actually make use of those correlative arrows when setting its decision.
That would get us at least as far as CDT+E does, while also reducing the problem of discovering the “entanglements” to just learning correct beliefs about correlative arrows, hidden variables or no hidden variables.
I would again like to hear what’s going on in the Counterfactual Mugging, as that looks like the first situation we cannot actually beat by learning correct causative and correlative beliefs, and then applying a proper “Causal and Correlative” Decision Theory.
Anyway, sometime this evening or something I’m going to watch your lecture, and email you for the slides.