Excellent example.
It seems to me, intuitively, that we should be able to get both the CDT feature of not thinking we can control our utility function through our actions and the EDT feature of taking the evidence our actions provide into account.
Here’s a somewhat contrived decision theory which I think captures both effects. It only makes sense for binary decisions.
First, for each action you compute the posterior probability of the causal parents, conditional on taking that action. So, depending on the details of the setup, smoking tells you that you're likely to be a smoke-lover, and refusing to smoke tells you that you're more likely to be a non-smoke-lover.
Then you take the action with the best "gain": how much better you do than you would with the other action, keeping the parent probabilities the same:
$$\text{Gain}(a) = E(U \mid a) - E(U \mid a, \operatorname{do}(\bar{a}))$$
($E(U \mid a, \operatorname{do}(\bar{a}))$ stands for the expected utility you get by first Bayes-conditioning on $a$, then causally conditioning on its opposite, $\bar{a}$.)
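In case it helps to see the rule spelled out mechanically, here is a minimal Python sketch of it. The function names and the way the problem is parametrized (a utility over parent state and action, and a posterior over parent states given each action) are my own illustrative choices, not anything fixed by the scenario:

```python
# A minimal sketch of the proposed "gain" rule for a binary decision.
# `parents` is the set of causal-parent states, `parent_posterior(s, a)`
# plays the role of P(s | a), and `utility(s, a)` is the utility of
# taking action a in parent state s -- all illustrative stand-ins.

def gain(action, other_action, parent_posterior, utility, parents):
    """Gain(a) = E(U | a) - E(U | a, do(other_action)).

    Both terms use the parent distribution obtained by Bayes-conditioning
    on `action`; only the act being evaluated differs between the terms.
    """
    posterior = {s: parent_posterior(s, action) for s in parents}
    e_u_action = sum(posterior[s] * utility(s, action) for s in parents)
    e_u_do_other = sum(posterior[s] * utility(s, other_action) for s in parents)
    return e_u_action - e_u_do_other


def choose(a1, a2, parent_posterior, utility, parents):
    """Take whichever of the two actions has the higher gain."""
    if gain(a1, a2, parent_posterior, utility, parents) >= \
       gain(a2, a1, parent_posterior, utility, parents):
        return a1
    return a2
```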
The idea is that you only want to compare each action to the relevant alternative. If you were to smoke, that means you're probably a smoke-lover; you will likely be killed, but the relevant alternative is one where you're also killed. In my scenario, the gain of smoking is +10. On the other hand, if you decide not to smoke, you're probably not a smoke-lover. That means the relevant alternative is smoking without being killed. In my scenario, the smoke-lover computes the gain of this action as −10. Therefore, the smoke-lover smokes.
(This only really shows the consistency of an equilibrium where the smoke-lover smokes: my argument contains the unjustified assumption that smoking is good evidence for being a smoke-lover and refusing to smoke is good evidence for not being one, which is only justified circularly by the conclusion.)
In your scenario, the smoke-lover computes the gain of smoking at +10, and the gain of not smoking at 0. So, again, the smoke-lover smokes.
The solution seems too ad hoc to really be right, but it does appear to capture something about the kind of reasoning required to do well on both problems.
Thanks for your answer! This "gain" approach seems quite similar to what Wedgwood (2013) has proposed as "Benchmark Theory", which behaves like CDT in cases with causally dominant actions, but more like EDT in cases without them. My hunch would be that one might be able to construct a series of thought experiments in which such a theory violates transitivity of preference, as demonstrated by Ahmed (2012).
I don’t understand how you arrive at a gain of 0 for not smoking as a smoke-lover in my example. I would think the gain for not smoking is higher:
$$\begin{aligned}
\text{Gain}(a_2) &= E[U \mid a_2] - E[U \mid a_2, \operatorname{do}(a_1)] \\
&= P(S_1 \mid a_2)\, U(S_1 \wedge a_2) + P(S_2 \mid a_2)\, U(S_2 \wedge a_2) - P(S_1 \mid a_2)\, U(S_1 \wedge a_1) - P(S_2 \mid a_2)\, U(S_2 \wedge a_1) \\
&= P(S_1 \mid a_2) \cdot (-10) + P(S_2 \mid a_2) \cdot 90 \\
&= 90 - 100 \cdot P(S_1 \mid a_2).
\end{aligned}$$
So as long as $P(S_1 \mid a_2) < 0.8$, the gain of not smoking is actually higher than that of smoking. For example, given prior probabilities of 0.5 for either state, the equilibrium probability of being a smoke-lover given not smoking will be at most 0.5 (in the case in which none of the smoke-lovers smoke).
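To make the arithmetic concrete, here is a quick check using only the numbers that appear above (the per-state utility differences of −10 and 90 from the expansion, and the stated gain of +10 for smoking):

```python
# Quick check of the threshold above, using only the utility differences
# that appear in the expansion (-10 in the smoke-lover state, +90 in the
# non-smoke-lover state) and the stated gain of +10 for smoking.

def gain_not_smoking(p_smoke_lover_given_not_smoking):
    p = p_smoke_lover_given_not_smoking  # P(S1 | a2)
    return p * (-10) + (1 - p) * 90      # = 90 - 100 * p

for p in (0.5, 0.8, 0.9):
    g = gain_not_smoking(p)
    print(f"P(S1|a2) = {p}: gain of not smoking = {g:5.1f}  "
          f"({'>' if g > 10 else '<='} gain of smoking, +10)")

# P(S1|a2) = 0.5: gain of not smoking =  40.0  (> gain of smoking, +10)
# P(S1|a2) = 0.8: gain of not smoking =  10.0  (<= gain of smoking, +10)
# P(S1|a2) = 0.9: gain of not smoking =   0.0  (<= gain of smoking, +10)
```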
Ah, you're right. So gain doesn't achieve as much as I thought it did. Thanks for the references, though. I think the idea is also similar in spirit to a proposal of Jeffrey's in his book The Logic of Decision; he presents an evidential theory, but is as troubled by cooperating in the prisoner's dilemma and one-boxing in Newcomb's problem as other decision theorists are. So he suggests that a rational agent should prefer actions such that, having updated on probably taking that action rather than another, it still prefers that action. (I don't remember what he proposed for cases where no such action is available.) This has a similar structure of first updating on a potential action and then checking how the alternatives look from that position.
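For what it's worth, the skeleton of that check is easy to write down in the same toy parametrization as the sketch above. Again, the names are mine, and this is only the "update on probably taking the action, then re-evaluate" structure as described, not Jeffrey's own formalism:

```python
# A rough sketch of the check described above: an action passes if, after
# updating beliefs on (probably) taking it, it still has the highest
# expected utility.  Names and parametrization are illustrative only.

def expected_utility(action, evidence_action, parent_posterior, utility, parents):
    # Beliefs about the parent states come from conditioning on
    # `evidence_action`; the utility is evaluated for `action`.
    return sum(parent_posterior(s, evidence_action) * utility(s, action)
               for s in parents)

def stable_actions(actions, parent_posterior, utility, parents):
    stable = []
    for a in actions:
        # Update on taking `a`, then check that `a` still looks best.
        if all(expected_utility(a, a, parent_posterior, utility, parents) >=
               expected_utility(b, a, parent_posterior, utility, parents)
               for b in actions):
            stable.append(a)
    return stable  # may be empty, as the parenthetical above notes
```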