(Side note: There’s an aspect to the notion of “causal counterfactual” that I think it’s worth distinguishing from what’s discussed here. This post seems to take causal counterfactuals to be a description of top-level decision reasoning. A different meaning is that causal counterfactuals refer to an aspiration / goal. Causal interventions are supposed to be interventions that “affect nothing but what’s explicitly said to be affected”. We could try to describe actions in this way, carefully carving out exactly what’s affected and what’s not; and we find that we can’t do this, and so causal counterfactuals aren’t, and maybe can’t possibly be, a good description (e.g. because of Newcomb-like problems). But instead we could view them as promises: if I manage to “do X and only X”, then exactly such-and-such effects result. In real life, if I actually do X there will be other effects, but they must result from me having done something other than just exactly X. This seems related to the way in which humans know how to express preferences data-efficiently, e.g. “just duplicate this strawberry, don’t do any crazy other stuff”.)
> Causal interventions are supposed to be interventions that “affect nothing but what’s explicitly said to be affected”.
I’m not really sure what you’re getting at. This seems like a really bad description to me. For example, suppose we have the causal graph x→y→z. We intervene on y. We don’t want to “affect nothing but y”—we affect z, too. But we don’t get to pick and choose; we couldn’t choose to affect x and y without affecting z.
So I’d rather say that we “affect nothing but what we intervene on and what’s downstream of what we intervened on”.
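To make the x→y→z example concrete, here is a minimal sketch of an intervention on that chain, using hypothetical linear structural equations (the particular equations are my own illustration, not from the discussion). Intervening on y (Pearl's do-operator) leaves the upstream variable x untouched but necessarily changes the downstream variable z:

```python
def chain(x=None, y=None):
    """Evaluate the chain x -> y -> z.

    Passing a value for x or y overrides that variable's structural
    equation, i.e. performs a do-intervention on it.
    """
    x = 1.0 if x is None else x       # exogenous: x := 1 unless do(x)
    y = 2.0 * x if y is None else y   # y := 2x unless do(y)
    z = 3.0 * y                       # z := 3y, always computed from y
    return x, y, z

observed = chain()          # no intervention: (1.0, 2.0, 6.0)
intervened = chain(y=5.0)   # do(y = 5): (1.0, 5.0, 15.0)
# x stays 1.0 (upstream of y, unaffected);
# z becomes 15.0 (downstream of y, necessarily affected).
```

The point the code illustrates: under do(y), we don't get to hold z fixed — z is recomputed from the new y, while x keeps its old value because nothing upstream of the intervention is touched.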
Not sure whether this has anything to do with your point, though.
> So I’d rather say that we “affect nothing but what we intervene on and what’s downstream of what we intervened on”.
A fair clarification.
> Not sure whether this has anything to do with your point, though.
My point is very tangential to your post: you’re talking about decision theory as top-level naturalized ways of making decisions, and I’m talking about some non-top-level intuitions that could be called CDT-like. (This maybe should’ve been a comment on your Dutch book post.) I’m trying to contrast the aspirational spirit of CDT, understood as “make it so that there’s such a thing as ‘all of what’s downstream of what we intervened on’ and we know about it”, with descriptive CDT, “there’s such a thing as ‘all of what’s downstream of what we intervened on’ and we can know about it”. Descriptive CDT is only sort of right in some contexts, and can’t be right in others; there’s no fully general Archimedean point from which we intervene.
We can make some things more CDT-ish though, if that’s useful. E.g. we could think more about how our decisions have effects, so that we have in view more of what’s downstream of decisions. Or e.g. we could make our decisions have fewer effects, for example by promising to later reevaluate some algorithm for making judgements, instead of hiding within our decision to do X also our decision to always use the piece-of-algorithm that (within some larger mental context) decided to do X. That is, we try to hold off on decisions that have downstream effects we don’t understand well yet.