From my perspective, I don’t think it’s been adequately established that we should prefer updateless CDT to updateless EDT
I agree with this.
It would be nice to have an example which doesn’t arise from an obviously bad agent design, but I don’t have one.
I’d also be interested in finding such a problem.
I am not sure whether your smoking lesion steelman actually makes a decisive case against evidential decision theory. If an agent knows about their utility function on some level, but not on the epistemic level, then this can just as well be made into a counter-example to causal decision theory. For example, consider a decision problem with the following payoff matrix:
Smoke-lover:
    Smokes:
        Killed: 10
        Not killed: −90
    Doesn't smoke:
        Killed: 0
        Not killed: 0
Non-smoke-lover:
    Smokes:
        Killed: −100
        Not killed: −100
    Doesn't smoke:
        Killed: 0
        Not killed: 0
For some reason, the agent doesn’t care whether they live or die. Also, let’s say that smoking makes a smoke-lover happy, but afterwards, they get terribly sick and lose 100 utilons. So they would only smoke if they knew they were going to be killed afterwards. The non-smoke-lover doesn’t want to smoke in any case.
Now, smoke-loving evidential decision theorists rightly choose smoking: they know that robots with a non-smoke-loving utility function would never have any reason to smoke, no matter which probabilities they assign. So if they end up smoking, then this means they are certainly smoke-lovers. It follows that they will be killed, and conditional on that state, smoking gives 10 more utility than not smoking.
Causal decision theory, on the other hand, seems to recommend a suboptimal action. Let a1 be smoking, a2 not smoking, S1 being a smoke-lover, and S2 being a non-smoke-lover. Moreover, say the prior probability P(S1) is 0.5. Then, for a smoke-loving CDT bot, the expected utility of smoking is just
E[U|a1]=P(S1)⋅U(S1∧a1)+P(S2)⋅U(S2∧a1)=0.5⋅10+0.5⋅(−90)=−40,
which is less than the certain 0 utilons for a2. Assigning a credence of around 1 to P(S1|a1), a smoke-loving EDT bot calculates
E[U|a1]=P(S1|a1)⋅U(S1∧a1)+P(S2|a1)⋅U(S2∧a1)≈1⋅10+0⋅(−90)=10,
which is higher than the expected utility of a2.
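As a quick numerical check, here is a minimal sketch of both calculations in Python (the dictionary encoding and names are my own, not part of the problem statement); it reproduces the −40 and 10 above.

# Sketch: the smoke-lover's payoffs, indexed by (state, action).
# S1 = smoke-lover (and hence killed), S2 = non-smoke-lover (and hence not killed);
# a1 = smoke, a2 = don't smoke.
U_smoke_lover = {
    ("S1", "a1"): 10, ("S2", "a1"): -90,
    ("S1", "a2"): 0,  ("S2", "a2"): 0,
}

def cdt_eu(action, p_s1):
    # CDT weights the states by the prior P(S1): the action has no causal effect on them.
    return p_s1 * U_smoke_lover[("S1", action)] + (1 - p_s1) * U_smoke_lover[("S2", action)]

def edt_eu(action, p_s1_given_action):
    # EDT weights the states by their probability conditional on the action taken.
    p = p_s1_given_action
    return p * U_smoke_lover[("S1", action)] + (1 - p) * U_smoke_lover[("S2", action)]

print(cdt_eu("a1", 0.5))   # -40.0: CDT's value of smoking with P(S1) = 0.5
print(cdt_eu("a2", 0.5))   #   0.0: CDT's value of not smoking
print(edt_eu("a1", 1.0))   #  10.0: EDT's value of smoking with P(S1|a1) ≈ 1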
The reason CDT fails here doesn’t seem to lie in a mistaken causal structure. Also, I’m not sure whether the problem for EDT in the smoking lesion steelman is really that it can’t condition on all its inputs. If EDT can’t condition on something, then EDT doesn’t account for this information, but this doesn’t seem to be a problem per se.
In my opinion, the problem lies in an inconsistency in the expected utility equations. Smoke-loving EDT bots calculate the probability of being a non-smoke-lover, but then the utility they get is actually the one from being a smoke-lover. For this reason, they can get some “back-handed” information about their own utility function from their actions. The agents basically fail to condition two factors of the same product on the same knowledge.
Say we don’t know our own utility function on an epistemic level. Ordinarily, we would calculate the expected utility of an action, both as smoke-lovers and as non-smoke-lovers, as follows:
E[U|a]=P(S1|a)⋅E[U|S1,a]+P(S2|a)⋅E[U|S2,a],
where, if U1 (U2) is the utility function of a smoke-lover (non-smoke-lover), E[U|Si,a] is equal to E[Ui|a]. In this case, we don’t get any information about our utility function from our own action, and hence, no Newcomb-like problem arises.
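Here is how I read that formula in code, as a minimal sketch (the expansion of E[Ui|a] as a sum over the states with the same conditional probabilities, and all names, are my own assumptions):

# U1 is the smoke-lover's utility function, U2 the non-smoke-lover's,
# both indexed by (state, action); S1 implies being killed, S2 implies not.
U1 = {("S1", "a1"): 10,   ("S2", "a1"): -90,  ("S1", "a2"): 0, ("S2", "a2"): 0}
U2 = {("S1", "a1"): -100, ("S2", "a1"): -100, ("S1", "a2"): 0, ("S2", "a2"): 0}

def expected_u(action, p_s1_given_a):
    # E[U|a] = P(S1|a)*E[U1|a] + P(S2|a)*E[U2|a], with each E[Ui|a] taken
    # over the same conditional state probabilities as the outer weights.
    p = {"S1": p_s1_given_a, "S2": 1 - p_s1_given_a}
    e_u1 = sum(p[s] * U1[(s, action)] for s in p)   # E[U1|a]
    e_u2 = sum(p[s] * U2[(s, action)] for s in p)   # E[U2|a]
    return p["S1"] * e_u1 + p["S2"] * e_u2

# Both actions are scored under the same uncertainty about which Ui is ours,
# so choosing an action no longer yields back-handed evidence about the utility function.
print(expected_u("a1", 0.5), expected_u("a2", 0.5))   # -70.0 0.0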
I’m unsure whether there is any causal decision theory derivative that gets my case (or all other possible cases in this setting) right. It seems like as long as the agent isn’t certain to be a smoke-lover from the start, there are still payoffs for which CDT would (wrongly) choose not to smoke.
I think that in that case the agent shouldn't smoke, and CDT is right, although there is side-channel information that can be used to come to the conclusion that the agent should smoke. Here's a reframing of the provided payoff matrix that makes this argument clearer. (Also, your problem as stated should have 0 utility for a non-smoke-lover imagining the situation where they smoke and get killed.)
Let's say that there is a kingdom which contains two types of people, good people and evil people, and a person doesn't necessarily know which type they are. There is a magical sword enchanted with a heavenly aura, and if a good person wields the sword, it will guide them to do heroic things, for +10 utility (according to a good person) and 0 utility (according to an evil person). However, if an evil person wields the sword, it will afflict them for the rest of their life with extreme itchiness, for −100 utility (according to everyone).
Good person's utility estimates:
    Takes sword:
        I'm good: 10
        I'm evil: −90
    Doesn't take sword: 0
Evil person's utility estimates:
    Takes sword:
        I'm good: 0
        I'm evil: −100
    Doesn't take sword: 0
As you can clearly see, this is the exact same payoff matrix as the previous example. However, now it's clear that if a (secretly good) CDT agent believes that most of society is evil, then it's a bad idea to pick up the sword: the agent is probably evil (according to the information they have) and will be tormented with itchiness for the rest of their life. If the agent believes that most of society is good, then it's a good idea to pick up the sword. Further, this situation is intuitively clear enough to argue that CDT just straight-up gets the right answer in this case.
A human (with some degree of introspective power) in this case could correctly reason: "oh hey, I just got a little warm fuzzy feeling upon thinking of the hypothetical where I wield the sword and it doesn't curse me. This is evidence that I'm good, because an evil person would not have that response, so I can safely wield the sword and will do so."
However, what the human is doing in this case is using side-channel information that isn't present in the problem description. They're directly experiencing sense data as a result of the utility calculation outputting 10 in that hypothetical, and updating on that. In a society where everyone was really terrible at introspection, so that the only access anyone had to their own decision algorithm was seeing their actual decision (and assuming there were no previous decision problems that good and evil people decide differently on, from which a good person could have learned that they were good), it seems to me that there's a very intuitively strong case for not picking up the sword/not smoking.
Excellent example.
It seems to me, intuitively, that we should be able to get both the CDT feature of not thinking we can control our utility function through our actions and the EDT feature of taking the information into account.
Here’s a somewhat contrived decision theory which I think captures both effects. It only makes sense for binary decisions.
First, for each action, you compute the posterior probability of the causal parents given that action. So, depending on the details of the setup, smoking tells you that you're likely to be a smoke-lover, and refusing to smoke tells you that you're more likely to be a non-smoke-lover.
Then you take the action with the best "gain": how much better you do in comparison to the other action, keeping the parent probabilities the same:
Gain(a)=E(U|a)−E(U|a,do(¯a))
(E(U|a,do(¯a)) stands for the expected utility you get by first Bayes-conditioning on a, then causally conditioning on its opposite.)
The idea is that you only want to compare each action to the relevant alternative. If you were to smoke, that means you're probably a smoke-lover; you will likely be killed, but the relevant alternative is one where you're also killed. In my scenario, the gain of smoking is +10. On the other hand, if you decide not to smoke, you're probably not a smoke-lover. That means the relevant alternative is smoking without being killed. In my scenario, the smoke-lover computes the gain of this action as −10. Therefore, the smoke-lover smokes.
(This only really shows the consistency of an equilibrium where the smoke-lover smokes; my argument contains the unjustified assumption that smoking is good evidence for being a smoke-lover and refusing to smoke is good evidence for not being one, which is only justified in a circular way by the conclusion.)
In your scenario, the smoke-lover computes the gain of smoking at +10, and the gain of not smoking at 0. So, again, the smoke-lover smokes.
The solution seems too ad hoc to really be right, but it does appear to capture something about the kind of reasoning required to do well on both problems.
Thanks for your answer! This "gain" approach seems quite similar to what Wedgwood (2013) has proposed as "Benchmark Theory", which behaves like CDT in cases with a causally dominant action, but more like EDT in cases without one. My hunch would be that one might be able to construct a series of thought experiments in which such a theory violates transitivity of preference, as demonstrated by Ahmed (2012).
I don’t understand how you arrive at a gain of 0 for not smoking as a smoke-lover in my example. I would think the gain for not smoking is higher:
Gain(a2) = E[U|a2] − E[U|a2,do(a1)]
= P(S1|a2)⋅U(S1∧a2) + P(S2|a2)⋅U(S2∧a2) − P(S1|a2)⋅U(S1∧a1) − P(S2|a2)⋅U(S2∧a1)
= P(S1|a2)⋅(−10) + P(S2|a2)⋅90
= 90 − 100⋅P(S1|a2).
So as long as P(S1|a2)<0.8, the gain of not smoking is actually higher than that of smoking. For example, given prior probabilities of 0.5 for either state, the equilibrium probability of being a smoke-lover given not smoking will be 0.5 at most (in the case in which none of the smoke-lovers smoke).
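A quick numerical check of this derivation, as a sketch in Python (the encoding and names are mine; the payoffs are the smoke-lover's from the matrix above):

# Smoke-lover's payoffs, indexed by (state, action); S1 = smoke-lover (killed), S2 = not.
U = {("S1", "a1"): 10, ("S2", "a1"): -90, ("S1", "a2"): 0, ("S2", "a2"): 0}

def gain(action, other, p_s1_given_action):
    # Gain(a) = E[U|a] - E[U|a, do(other)]: Bayes-condition the state on a,
    # then compare taking a against causally taking the other action instead.
    p = {"S1": p_s1_given_action, "S2": 1 - p_s1_given_action}
    e_u_a = sum(p[s] * U[(s, action)] for s in p)
    e_u_do_other = sum(p[s] * U[(s, other)] for s in p)
    return e_u_a - e_u_do_other

print(gain("a1", "a2", 1.0))   # +10: gain of smoking when P(S1|a1) ≈ 1
print(gain("a2", "a1", 0.5))   # +40 = 90 − 100⋅0.5: gain of not smoking at P(S1|a2) = 0.5
print(gain("a2", "a1", 0.8))   # +10: ties the gain of smoking; for P(S1|a2) < 0.8, not smoking wins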
Ah, you're right. So gain doesn't achieve as much as I thought it did. Thanks for the references, though. I think the idea is also similar in spirit to a proposal of Jeffrey's in his book The Logic of Decision; he presents an evidential theory, but is as troubled by cooperating in the prisoner's dilemma and one-boxing in Newcomb's problem as other decision theorists. So he suggests that a rational agent should prefer actions such that, having updated on probably taking that action rather than another, you still prefer that action. (I don't remember what he proposed for cases when no such action is available.) This has a similar structure of first updating on a potential action and then checking how alternatives look from that position.