Smoking Lesion Steelman
It seems plausible to me that any example I’ve seen so far which seems to require causal/counterfactual reasoning is more properly solved by taking the right updateless perspective, and taking the action or policy which achieves maximum expected utility from that perspective. If this were the right view, then the aim would be to construct something like updateless EDT.
I give a variant of the smoking lesion problem which overcomes an objection to the classic smoking lesion, and which is solved correctly by CDT, but which is not solved by updateless EDT.
UDT as originally described involved a “mathematical intuition module” which would take some sort of logical counterfactual. However, I’ll be using the term “updateless” purely to describe the decision theory you get by asking another decision theory to choose a policy as soon as it is born, rather than using that decision theory all along. Hence, updateless CDT is what you get when you ask a CDT agent to choose a policy; updateless EDT is what you get when you ask an EDT agent to choose a policy.
I’ll also be treating “counterfactual” as synonymous with “causal”. There are cases where physical causal reasoning seems to give the wrong counterfactual structure, like Newcomb’s problem. I won’t be trying to solve that problem here; I’m more trying to ask whether there are any cases where causal/counterfactual reasoning looks like what we really want at all.
The “common wisdom”, as I have observed it, is that we should be aiming to construct something like an updateless CDT which works well with logical uncertainty. I’m not sure whether that would be the dominant opinion right now, but certainly TDT set things in this direction early on. From my perspective, I don’t think it’s been adequately established that we should prefer updateless CDT to updateless EDT; providing some evidence on that is the implicit aim of this post. Explicitly, I’ll mostly be contrasting updateful CDT with updateful EDT.
It might be impossible to construct an appropriate logically-updateless perspective, in which case we need logical counterfactuals to compute the effects of actions/policies; but, in some sense this would only be because we couldn’t make a sufficiently ignorant prior. (I think of my own attempt at logically updateless decisions this way.) However, that would be unfortunate; it should be easier to construct good counterfactuals if we have a stronger justification for counterfactual reasoning being what we really want. Hence, another aim of this post is to help provide constraints on what good counterfactual reasoning should look like, by digging into reasons to want counterfactuals.
Thanks go to Alex Mennen, Evan Lloyd, and Daniel Demski for conversations sharpening these ideas.
Why did anyone think CDT was a good idea?
The original reasons for preferring CDT to EDT are largely suspect, falling to the “Why Ain’cha Rich?” objection. From the LessWrong/MIRI perspective, it’s quite surprising that Newcomb’s Problem was the original motivation for CDT, when we now use it as a point against. This is not the only example. The SEP article on CDT gives Prisoner’s Dilemma as the first example of CDT’s importance, pointing out that EDT cooperates with a copy of itself in PD because unlike CDT it fails to take into account that cooperating is “auspicious but not efficacious”.
The Smoking Lesion problem isn’t vulnerable to “Why Ain’cha Rich?”, and so has been a more popular justification for CDT in the LessWrong-sphere. However, I don’t think it provides a good justification for CDT at all. The problem is ill-posed: it is assumed that those with a smoking lesion are more likely to smoke, but this is inconsistent with their being EDT agents (who are not inclined to smoke given the problem setup). (Cheating Death in Damascus points out that Murder Lesion is ill-posed due to similar problems.)
So, for some time, I have thought that the main effective argument against EDT and for CDT was XOR blackmail. However, XOR blackmail is also solved by updateless EDT. We want to go updateless either way, so this doesn’t give us reason to favor CDT over EDT.
Regardless, I’m quite sympathetic to the intuition behind CDT, namely that it’s important to consider the counterfactual consequences of your actions rather than just the conditional expected utilities. Furthermore, the following idea (which I think I got from an academic paper on CDT, but haven’t been able to track down which one) seems at least plausible to me:
If we were dealing with ideal decision agents, who could condition on all the inputs to their decision process, CDT would equal EDT. However, imperfect agents often cannot condition on all their inputs. In this situation, EDT will get things wrong. CDT corrects this error by cutting the relationships which would be screened off if only we could condition on those inputs.
This intuition might seem odd given all the cases where CDT doesn’t do so well. If CDT fails Newcomb’s problem and EDT doesn’t, it seems CDT is at best a hack which repairs some cases fitting the above description at the expense of damaging performance in other cases. Perhaps this is the right perspective. But, we could also think of Newcomblike decision problems as cases where the classical causal structure is just wrong. With the right causal structure, we might postulate, CDT would always be as good as or better than EDT. This is part of what TDT/FDT is.
I’ll charitably assume that we can find appropriate causal structures. The question is, even given that, can we make any sense of the argument for CDT? Smoking Lesion was supposed to be an example of this, but the problem was ill-posed. So, can we repair it?
A Smoking Lesion Steelman
Agents who Don’t Know their Utility Function
I’ll be assuming a very particular form of utility function ignorance. Suppose that agents do not have access to their own source code. Furthermore, whether CDT or EDT, the agents have an “epistemic module” which holds the world-model: either just the probability distribution (for EDT) or the probability distribution plus causal beliefs. This epistemic module is ignorant of the utility function. However, a “decision module” uses what the epistemic module knows, together with the utility function, to calculate the value of each possible action (in the CDT or EDT sense).
These agents also lack introspective capabilities of any kind. The epistemic module cannot watch the decision module calculate a utility and get information about the utility function that way. (This blocks things like the tickle defense.)
These agents therefore have full access to their own utility functions for the sake of doing the usual decision-theoretic calculations. Nonetheless, they lack knowledge of their own utility function in the epistemic sense. They can only infer what their utility function might be from their actions. This may be difficult, because, as we shall see, EDT agents are sometimes motivated to act in a way which avoids giving information about the utility function.
While I admit this is simply a bad agent design, I don’t think it’s unrealistic as an extrapolation of current AI systems, or particularly bad as a model of (one aspect of) human ignorance about our own values.
More importantly, this is just supposed to be a toy example to illustrate what happens when an agent is epistemically ignorant of an important input to its decision process. It would be nice to have an example which doesn’t arise from an obviously bad agent design, but I don’t have one.
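To make the setup concrete, here is a toy sketch of the architecture described above (my own formalization; the class and variable names are made up for illustration). The decision module can call the utility function, but the epistemic module’s world-model contains no representation of it and cannot observe the decision module at work:

```python
class EpistemicModule:
    """Holds the world-model; knows nothing about this robot's own utility function."""
    def __init__(self, beliefs):
        # beliefs: dict mapping each action to {outcome: probability}
        # (evidential conditionals for EDT; a CDT agent would also carry causal beliefs)
        self.beliefs = beliefs

    def distribution_given(self, action):
        return self.beliefs[action]


class DecisionModule:
    """Scores actions using the epistemic module's beliefs plus the utility function."""
    def __init__(self, epistemic, utility):
        self.epistemic = epistemic
        self.utility = utility  # usable in the calculation, invisible to the epistemic module

    def choose(self, actions):
        def expected_utility(action):
            dist = self.epistemic.distribution_given(action)
            return sum(p * self.utility(outcome) for outcome, p in dist.items())
        return max(actions, key=expected_utility)
```

The point of the separation is only that the utility function never appears inside the beliefs, so the agent’s probability distribution cannot be updated on it directly.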
Smoking Robots
Now suppose that there are two types of robots which have been produced in equal quantities: robots who like smoking, and robots who are indifferent toward smoking. I’ll call these “smoke-lovers” and “non-smoke-lovers”. Smoke-lovers ascribe smoking +10 utility. Non-smoke-lovers assign smoking −1 due to the expense of obtaining something to smoke. Also, no robot wants to be destroyed; all robots ascribe this −100 utility.
There is a hunter who systematically destroys all smoke-loving robots, whether they choose to smoke or not. We can imagine that the robots have serial numbers, which they cannot remove or obscure. The hunter has a list of smoke-lover serial numbers, and so, can destroy all and only the smoke-lovers. The robots don’t have access to the list, so their own serial numbers tell them nothing.
So, the payoffs look like this:
- Smoke-lover:
  - Smokes:
    - Killed: −90
    - Not killed: +10
  - Doesn’t smoke:
    - Killed: −100
    - Not killed: 0
- Non-smoke-lover:
  - Smokes:
    - Killed: −101
    - Not killed: −1
  - Doesn’t smoke:
    - Killed: −100
    - Not killed: 0
All robots know all of this. They just don’t know their own utility function (epistemically).
If we suppose that the robots are EDT agents with epsilon-exploration, what happens?
Non-smoke-lovers have no reason to ever smoke, so they’ll only smoke with probability epsilon. The smoke-lovers are more complicated.
The expected utility for different actions depends on the frequency of those actions in the population of smoke-lovers and non-smoke-lovers, so there’s a Nash-equilibrium type solution. It couldn’t be that all agents choose not to smoke except epsilon often; then, smoking would provide no evidence, so the smoke-lovers would happily decide to smoke. However, it also can’t be that smoke-lovers smoke and non-smoke-lovers don’t, because then the conditional probability of being killed given that you smoke would be too high.
The equilibrium will be for smoke-lovers to smoke just a little more frequently than epsilon, in such a way as to equalize the EDT smoke-lover’s expected utility for smoking and not smoking. (We can imagine a very small amount of noise in agents’ utility calculations to explain how this mixed-strategy equilibrium is actually achieved.)
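In symbols (my notation, just restating the condition with the payoffs above): writing $P(k \mid s)$ and $P(k \mid \neg s)$ for the probability of being killed conditional on smoking and on not smoking, the equilibrium point is where

$$-90\,P(k \mid s) + 10\,(1 - P(k \mid s)) = -100\,P(k \mid \neg s),$$

so smoking is preferred exactly when $P(k \mid s) - P(k \mid \neg s) \le \tfrac{1}{10}$; the smoke-lovers’ smoking rate rises until the evidential penalty exactly cancels the +10.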
As with the original smoking lesion problem, this looks like a mistake on the part of EDT. Smoking does not increase a robot’s odds of being hunted down and killed. CDT smoke-lovers would choose to smoke.
Furthermore, this isn’t changed at all by trying updateless reasoning. There’s not really any more-ignorant position for an updateless agent to back off to, at least not one which would be helpful. So, it seems we really need CDT for this one.
What should we think of this?
I think the main question here is how this generalizes to other types of lack of self-knowledge. It’s quite plausible that any conclusions from this example depend on the details of my utility-ignorance model, which would mean we can fix things by avoiding utility-ignorance (rather than adopting CDT).
On the other hand, maybe there are less easily avoidable forms of self-ignorance which lead to similar conclusions. Perhaps the argument that CDT outperforms EDT in cases where EDT isn’t able to condition on all its inputs can be formalized. If so, it might even provide an argument which is persuasive to an EDT agent, which would be really interesting.
I didn’t find the conclusion about the smoke-lovers and non-smoke-lovers obvious in the EDT case at first glance, so I added in some numbers and ran through the calculations that the robots will do to see for myself and get a better handle on what not being able to introspect but still gaining evidence about your utility function actually looks like.
Suppose that, out of the $N$ robots that have ever been built, $nN$ are smoke-lovers and $(1-n)N$ are non-smoke-lovers. Suppose also that smoke-lovers end up smoking with probability $p$ and non-smoke-lovers end up smoking with probability $q$.
Then $(pn+q(1-n))N$ robots smoke, and $((1-p)n+(1-q)(1-n))N$ robots don’t smoke. So by Bayes’ theorem, if a robot smokes, there is a $\frac{pn}{pn+q(1-n)}$ chance that it’s killed, and if a robot doesn’t smoke, there’s a $\frac{(1-p)n}{1-(pn+q(1-n))}$ chance that it’s killed.
Hence, the expected utilities are:
An EDT non-smoke-lover looks at the possibilities. It sees that if it smokes, it expects to get $-101\frac{pn}{pn+q(1-n)} - 1\left(1-\frac{pn}{pn+q(1-n)}\right)$ utilons, and that if it doesn’t smoke, it expects to get $-100\frac{(1-p)n}{1-(pn+q(1-n))}$ utilons.
An EDT smoke-lover looks at the possibilities. It sees that if it smokes, it expects to get $-90\frac{pn}{pn+q(1-n)} + 10\left(1-\frac{pn}{pn+q(1-n)}\right)$ utilons, and if it doesn’t smoke, it expects to get $-100\frac{(1-p)n}{1-(pn+q(1-n))}$ utilons.
Now consider some equilibria. Suppose that no non-smoke-lovers smoke, but some smoke-lovers smoke. So $q=\epsilon$ and $p \gg \epsilon$. So (taking limits as $\epsilon \to 0$ along the way):
- Non-smoke-lovers expect to get $-101$ utilons if they smoke, and $-100\frac{n-pn}{1-pn}$ utilons if they don’t smoke. Since $n<1$, the latter is greater than $-100$, hence greater than $-101$, so non-smoke-lovers will choose not to smoke.
- Smoke-lovers expect to get $-90$ utilons if they smoke, and $-100\frac{n-pn}{1-pn}$ utilons if they don’t smoke. Smoke-lovers would be indifferent between the two if $p = 10 - \frac{9}{n}$. This works fine if at least 90% of robots are smoke-lovers, and equilibrium is achieved. But if fewer than 90% of robots are smoke-lovers, then there is no point at which they would be indifferent, and they will always choose not to smoke.

But wait! If fewer than 90% are smoke-lovers, the conclusion that they never smoke is inconsistent with the assumption that $p$ is much larger than $\epsilon$. So instead suppose that $p$ is only a little bit bigger than $\epsilon = q$, say $p = k\epsilon$. Then:

- Non-smoke-lovers expect to get $-100\left(\frac{k}{1+(k-1)n}+\frac{1}{100n}\right)n$ utilons if they smoke, and $-100n$ utilons if they don’t smoke. They will choose to smoke if $k < 1 - \frac{1}{101n-100n^2}$, i.e. if smoke-lovers smoke so rarely that not smoking would make them believe they’re a smoke-lover about to be killed by the blade runner.
- Smoke-lovers expect to get $-100\left(\frac{k}{1+(k-1)n}-\frac{1}{10n}\right)n$ utilons if they smoke, and $-100n$ utilons if they don’t smoke. They are indifferent between these two when $k = 1 + \frac{1}{9n-10n^2}$. This means that, when $k$ is at the equilibrium point, non-smoke-lovers will not choose to smoke when fewer than 90% of robots are smoke-lovers, which is exactly when this regime applies.
I wrote a quick python simulation to check these conclusions, and it was the case that $p = 10 - \frac{9}{n}$ for $0.9 < n < 1$, and $p = \left(1+\frac{1}{9n-10n^2}\right)\epsilon$ for $0 < n < 0.9$ as well.
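A minimal sketch of the kind of check described (my own reconstruction, not the actual script; the bisection method, the names, and the value of the exploration rate are assumptions):

```python
# For a given fraction n of smoke-lovers, find the smoking probability p at
# which an EDT smoke-lover is indifferent between smoking and not smoking,
# with non-smoke-lovers smoking at the exploration rate EPS.
EPS = 1e-6  # assumed exploration rate

def smoke_lover_eus(p, q, n):
    """Expected utilities (smoke, don't smoke) for an EDT smoke-lover."""
    killed_if_smoke = p * n / (p * n + q * (1 - n))
    killed_if_not = (1 - p) * n / (1 - (p * n + q * (1 - n)))
    return (-90 * killed_if_smoke + 10 * (1 - killed_if_smoke),
            -100 * killed_if_not)

def equilibrium_p(n, q=EPS, iters=200):
    """Bisect for the p where the two expected utilities tie: the difference
    is decreasing in p, positive at p = q and negative at p = 1."""
    lo, hi = q, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        eu_smoke, eu_not = smoke_lover_eus(mid, q, n)
        if eu_smoke > eu_not:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n in (0.3, 0.6, 0.95):
    predicted = 10 - 9 / n if n > 0.9 else (1 + 1 / (9 * n - 10 * n ** 2)) * EPS
    print(f"n={n}: equilibrium p = {equilibrium_p(n):.4e}, predicted {predicted:.4e}")
```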
I agree with this.
I’d also be interested in finding such a problem.
I am not sure whether your smoking lesion steelman actually makes a decisive case against evidential decision theory. If an agent knows about their utility function on some level, but not on the epistemic level, then this can just as well be made into a counter-example to causal decision theory. For example, consider a decision problem with the following payoff matrix:
- Smoke-lover:
  - Smokes:
    - Killed: 10
    - Not killed: −90
  - Doesn’t smoke:
    - Killed: 0
    - Not killed: 0
- Non-smoke-lover:
  - Smokes:
    - Killed: −100
    - Not killed: −100
  - Doesn’t smoke:
    - Killed: 0
    - Not killed: 0
For some reason, the agent doesn’t care whether they live or die. Also, let’s say that smoking makes a smoke-lover happy, but afterwards, they get terribly sick and lose 100 utilons. So they would only smoke if they knew they were going to be killed afterwards. The non-smoke-lover doesn’t want to smoke in any case.
Now, smoke-loving evidential decision theorists rightly choose smoking: they know that robots with a non-smoke-loving utility function would never have any reason to smoke, no matter which probabilities they assign. So if they end up smoking, then this means they are certainly smoke-lovers. It follows that they will be killed, and conditional on that state, smoking gives 10 more utility than not smoking.
Causal decision theory, on the other hand, seems to recommend a suboptimal action. Let $a_1$ be smoking, $a_2$ not smoking, $S_1$ being a smoke-lover, and $S_2$ being a non-smoke-lover. Moreover, say the prior probability $P(S_1)$ is 0.5. Then, for a smoke-loving CDT bot, the expected utility of smoking is just
$$E[U|a_1] = P(S_1)\cdot U(S_1\wedge a_1) + P(S_2)\cdot U(S_2\wedge a_1) = 0.5\cdot 10 + 0.5\cdot(-90) = -40,$$
which is less than the certain 0 utilons for $a_2$. Assigning a credence of around 1 to $P(S_1|a_1)$, a smoke-loving EDT bot calculates
$$E[U|a_1] = P(S_1|a_1)\cdot U(S_1\wedge a_1) + P(S_2|a_1)\cdot U(S_2\wedge a_1) \approx 1\cdot 10 + 0\cdot(-90) = 10,$$
which is higher than the expected utility of $a_2$.
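A quick numerical check of these two calculations (my own illustration, not part of the comment; the variable names and the approximate EDT credence are assumptions of the sketch):

```python
# The smoke-loving agent's utility for each (true type, action) pair,
# read off the payoff matrix above (the agent is killed iff its true type is S1).
U = {("S1", "smoke"): 10, ("S2", "smoke"): -90,
     ("S1", "refrain"): 0, ("S2", "refrain"): 0}

def expected_utility(p_s1, action):
    """E[U | action] given credence p_s1 in being a smoke-lover (state S1)."""
    return p_s1 * U[("S1", action)] + (1 - p_s1) * U[("S2", action)]

# CDT: the action does not cause the type, so both actions use the prior 0.5.
print("CDT smoke:  ", expected_utility(0.5, "smoke"))    # -40.0
print("CDT refrain:", expected_utility(0.5, "refrain"))  # 0.0 -> CDT refrains

# EDT: only smoke-lovers would ever smoke, so P(S1 | smoke) is about 1;
# the credence given refraining doesn't matter, since refraining pays 0 either way.
print("EDT smoke:  ", expected_utility(1.0, "smoke"))    # 10.0
print("EDT refrain:", expected_utility(0.5, "refrain"))  # 0.0 -> EDT smokes
```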
The reason CDT fails here doesn’t seem to lie in a mistaken causal structure. Also, I’m not sure whether the problem for EDT in the smoking lesion steelman is really that it can’t condition on all its inputs. If EDT can’t condition on something, then EDT doesn’t account for this information, but this doesn’t seem to be a problem per se.
In my opinion, the problem lies in an inconsistency in the expected utility equations. Smoke-loving EDT bots calculate the probability of being a non-smoke-lover, but then the utility they get is actually the one from being a smoke-lover. For this reason, they can get some “back-handed” information about their own utility function from their actions. The agents basically fail to condition two factors of the same product on the same knowledge.
Say we don’t know our own utility function on an epistemic level. Ordinarily, we would calculate the expected utility of an action, both as smoke-lovers and as non-smoke-lovers, as follows:
$$E[U|a] = P(S_1|a)\cdot E[U|S_1,a] + P(S_2|a)\cdot E[U|S_2,a],$$
where, if $U_1$ ($U_2$) is the utility function of a smoke-lover (non-smoke-lover), $E[U|S_i,a]$ is equal to $E[U_i|a]$. In this case, we don’t get any information about our utility function from our own action, and hence, no Newcomb-like problem arises.
I’m unsure whether there is any causal decision theory derivative that gets my case (or all other possible cases in this setting) right. It seems like as long as the agent isn’t certain to be a smoke-lover from the start, there are still payoffs for which CDT would (wrongly) choose not to smoke.
I think that in that case, the agent shouldn’t smoke, and CDT is right, although there is side-channel information that can be used to come to the conclusion that the agent should smoke. Here’s a reframing of the provided payoff matrix that makes this argument clearer. (also, your problem as stated should have 0 utility for a nonsmoker imagining the situation where they smoke and get killed)
Let’s say that there is a kingdom which contains two types of people, good people and evil people, and a person doesn’t necessarily know which type they are. There is a magical sword enchanted with a heavenly aura, and if a good person wields the sword, it will guide them to do heroic things, for +10 utility (according to a good person) and 0 utility (according to a bad person). However, if an evil person wields the sword, it will afflict them for the rest of their life with extreme itchiness, for −100 utility (according to everyone).
- Good person’s utility estimates:
  - Takes sword:
    - I’m good: 10
    - I’m evil: −90
  - Doesn’t take sword: 0
- Evil person’s utility estimates:
  - Takes sword:
    - I’m good: 0
    - I’m evil: −100
  - Doesn’t take sword: 0
As you can clearly see, this is the exact same payoff matrix as the previous example. However, now it’s clear that if a (secretly good) CDT agent believes that most of society is evil, then it’s a bad idea to pick up the sword, because the agent is probably evil (according to the info they have) and will be tormented with itchiness for the rest of their life, and if it believes that most of society is good, then it’s a good idea to pick up the sword. Further, this situation is intuitively clear enough to argue that CDT just straight-up gets the right answer in this case.
A human (with some degree of introspective power) in this case, could correctly reason “oh hey I just got a little warm fuzzy feeling upon thinking of the hypothetical where I wield the sword and it doesn’t curse me. This is evidence that I’m good, because an evil person would not have that response, so I can safely wield the sword and will do so”.
However, what the human is doing in this case is using side-channel information that isn’t present in the problem description. They’re directly experiencing sense data as a result of the utility calculation outputting 10 in that hypothetical, and updating on that. In a society where everyone was really terrible at introspection so the only access they had to their decision algorithm was seeing their actual decision, (and assuming no previous decision problems that good and evil people decide differently on so the good person could learn that they were good by their actions), it seems to me like there’s a very intuitively strong case for not picking up the sword/not smoking.
Excellent example.
It seems to me, intuitively, that we should be able to get both the CDT feature of not thinking we can control our utility function through our actions and the EDT feature of taking the information into account.
Here’s a somewhat contrived decision theory which I think captures both effects. It only makes sense for binary decisions.
First, for each action, you compute the posterior probability of the causal parents. So, depending on details of the setup, smoking tells you that you’re likely to be a smoke-lover, and refusing to smoke tells you that you’re more likely to be a non-smoke-lover.
Then, you take the action with the best “gain”: the amount better you do in comparison to the other action, keeping the parent probabilities the same:
$$\text{Gain}(a) = E(U|a) - E(U|a, do(\bar{a}))$$
($E(U|a, do(\bar{a}))$ stands for the expectation on utility which you get by first Bayes-conditioning on $a$, then causal-conditioning on its opposite.)
The idea is that you only want to compare each action to the relevant alternative. If you were to smoke, it means that you’re probably a smoker; you will likely be killed, but the relevant alternative is one where you’re also killed. In my scenario, the gain of smoking is +10. On the other hand, if you decide not to smoke, you’re probably not a smoker. That means the relevant alternative is smoking without being killed. In my scenario, the smoke-lover computes the gain of this action as −10. Therefore, the smoke-lover smokes.
(This only really shows the consistency of an equilibrium where the smoke-lover smokes; my argument contains the unjustified assumption that smoking is good evidence for being a smoke-lover and refusing to smoke is good evidence for not being one, which is only justified in a circular way by the conclusion.)
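Spelled out with the payoffs from my scenario, under that assumption (conditioning on smoking makes being killed nearly certain, conditioning on not smoking makes survival nearly certain):

$$\text{Gain}(\text{smoke}) \approx E(U \mid \text{smoke}) - E(U \mid \text{smoke}, do(\text{don't smoke})) \approx -90 - (-100) = +10,$$
$$\text{Gain}(\text{don't smoke}) \approx E(U \mid \text{don't smoke}) - E(U \mid \text{don't smoke}, do(\text{smoke})) \approx 0 - 10 = -10.$$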
In your scenario, the smoke-lover computes the gain of smoking at +10, and the gain of not smoking at 0. So, again, the smoke-lover smokes.
The solution seems too ad-hoc to really be right, but, it does appear to capture something about the kind of reasoning required to do well on both problems.
Thanks for your answer! This “gain” approach seems quite similar to what Wedgwood (2013) has proposed as “Benchmark Theory”, which behaves like CDT in cases with, but more like EDT in cases without causally dominant actions. My hunch would be that one might be able to construct a series of thought-experiments in which such a theory violates transitivity of preference, as demonstrated by Ahmed (2012).
I don’t understand how you arrive at a gain of 0 for not smoking as a smoke-lover in my example. I would think the gain for not smoking is higher:
$$\text{Gain}(a_2) = E[U|a_2] - E[U|a_2, do(a_1)] = P(S_1|a_2)\cdot U(S_1\wedge a_2) + P(S_2|a_2)\cdot U(S_2\wedge a_2) - P(S_1|a_2)\cdot U(S_1\wedge a_1) - P(S_2|a_2)\cdot U(S_2\wedge a_1)$$
$$= P(S_1|a_2)\cdot(-10) + P(S_2|a_2)\cdot 90 = P(S_1|a_2)\cdot(-100) + 90.$$
So as long as $P(S_1|a_2) < 0.8$, the gain of not smoking is actually higher than that of smoking. For example, given prior probabilities of 0.5 for either state, the equilibrium probability of being a smoke-lover given not smoking will be 0.5 at most (in the case in which none of the smoke-lovers smoke).
Ah, you’re right. So gain doesn’t achieve as much as I thought it did. Thanks for the references, though. I think the idea is also similar in spirit to a proposal of Jeffrey’s in his book The Logic of Decision; he presents an evidential theory, but is as troubled by cooperating in prisoner’s dilemma and one-boxing in Newcomb’s problem as other decision theorists. So, he suggests that a rational agent should prefer actions such that, having updated on probably taking that action rather than another, you still prefer that action. (I don’t remember what he proposed for cases when no such action is available.) This has a similar structure of first updating on a potential action and then checking how alternatives look from that position.
The claim that “this isn’t changed at all by trying updateless reasoning” depends on the assumptions about updateless reasoning. If the agent chooses a policy in the form of a self-sufficient program, then you are right. On the other hand, if the agent chooses a policy in the form of a program with oracle access to the “utility estimator,” then there is an equilibrium where both smoke-lovers and non-smoke-lovers self-modify into CDT. Admittedly, there are also “bad” equilibria, e.g. non-smoke-lovers staying with EDT and smoke-lovers choosing between EDT and CDT with some probability. However, it seems arguable that the presence of bad equilibria is due to the “degenerate” property of the problem that one type of agents have incentives to move away from EDT whereas another type has exactly zero such incentives.
The non-smoke-loving agents think of themselves as having a negative incentive to switch to CDT in that case. They think that if they build a CDT agent with oracle access to their true reward function, it may smoke (since they don’t know what their true reward function is). So I don’t think there’s an equilibrium there. The non-smoke-lovers would prefer to explicitly give a CDT successor a non-smoke-loving utility function, if they wanted to switch to CDT. But then, this action itself would give evidence of their own true utility function, likely counter-balancing any reason to switch to CDT.
I was wondering about what happens if the agents try to write a strategy for switching between using such a utility oracle and a hand-written utility function (which would in fact be the same function, since they prefer their own utility function). But this probably doesn’t do anything nice either, since a useful choice of policy there would also reveal too much information about motives.
Yeah, you’re right. This setting is quite confusing :) In fact, if your agent doesn’t commit to a policy once and for all, things get pretty weird because it doesn’t trust its future-self.
I like this line of inquiry; it seems like being very careful about the justification for CDT will probably give a much clearer sense of what we actually want out of “causal” structure for logical facts.
First of all, it seems to me that “updateless CDT” and “updateless EDT” are the same for agents with access to their own internal states immediately prior to the decision theory computation: on an appropriate causal graph, such internal states would be the only nodes with arrows leading to the node “output of decision theory”, so if their value is known, then severing those arrows does not affect the computation for updating on an observation of the value of the “output of decision theory” node. So the counterfactual and conditional probability distributions are the same, and thus CDT and EDT are the same.
Anyway, here is another example where CDT is superior to EDT, without obviously bad agent design. Suppose an agent needs to decide whether to undertake a mission that may either succeed or fail, with utilities:
- Not trying: 0
- Trying and failing: −1000
- Trying and succeeding: 100
The agent initially believes that its probability of success is 50%, but it can perform an expensive computation (-10 utility) to update this probability to either 1% or 99%. In any decision theory, if the new probability is 1% then it will not try, and if the new probability is 99% then it will try. However, it also has to make the decision whether to perform the computation in the first place, and if not, whether to try anyway or not. Under CDT, the choice is easy: computation gives an expected utility of
0.5(0) + 0.5(0.99(100) − 0.01(1000)) − 10 = 34.5
trying without computation gives an expected utility of
0.5(100) − 0.5(1000) = −450
and not trying without computation gives a utility of 0, so the agent performs the computation.
Under EDT, the equilibrium solution cannot be to always perform the computation. Indeed, if it were so, then the expected utility for trying without computation would be
0.99(100) − 0.01(1000) = 89
while the other two expected utilities would be the same, so the agent would try without computation. (If the agent observes itself trying, it infers that it must have done so because it computed the probability as 99%, and thus the probability of success must be 99%.)
The actual equilibrium would likely be to usually run the computation, but sometimes try without computation. But that is irrelevant; the point is that EDT recommends something different than the correct CDT solution.
Note that the computation cost is in fact irrelevant to the example; I only introduced it to motivate that a decision needs to be made about whether to make the computation or not. In fact, EDT would recommend a non-optimal solution even if there were no cost associated with running the computation.
As before, the key feature of this example is that while computing expected utilities, the agent does not have access to information about its internal states prior to choosing its action: it must choose to either update its priors about them using information from its counterfactual action (EDT) or sever this causal connection before updating (CDT).
As far as I can tell, there is no analogous setup where EDT is preferable to CDT.
Note: The reason the idea of this example can’t be used in the setup of the OP is that in the OP, the inaccessible variable (the utility function) is actually used in the decision theory computation, whereas in my example the inaccessible variable (the probability of success) is inaccessible because it is hard to compute, so it wouldn’t make sense to use it in the computation.
I don’t think an “appropriate causal graph” necessarily has the structure you suggest. (We don’t have a good idea for what causal graphs on logical uncertainty look like.) It’s plausible that your assertion is true, but not obvious.
EDT isn’t nearly this bad. I think a lot of people have this idea that EDT goes around wagging tails of dogs to try to make the dogs happy. But, EDT doesn’t condition on the dog’s tail wagging: it conditions on personally wagging the dog’s tail, which has no a priori reason to be correlated with the dog’s happiness.
Similarly, EDT doesn’t just condition on “trying”: it conditions on everything it knows, including that it hasn’t yet performed the computation. The only equilibrium solution will be for the AI to run the computation every time except on exploration rounds. It sees that it does quite poorly on the exploration rounds where it tries without running the computation, so it never chooses to do that.
My intuition is that if you are trying to draw causal graphs that do something other than draw arrows from x to f(x) (where f is something like a decision theory), then you are doing something wrong. But I guess I could be wrong about this. However, the point stands that it won’t be possible to distinguish CDT and EDT without either finding a plausible causal graph which doesn’t have the property I want, or talking about an agent that doesn’t have access to its own internal states immediately prior to the decision theory computation (and it seems reasonable to say that such an agent is badly designed).
After thinking more specifically about my example, I think it is based on mistakenly conflating two conceptual divisions of the process in question into action vs computation. I think it is still good as an example of approximately how badly designed an agent needs to be before CDT and EDT become distinct.