It seems plausible to me that any example I’ve seen so far which seems to require causal/counterfactual reasoning is more properly solved by taking the right updateless perspective, and taking the action or policy which achieves maximum expected utility from that perspective. If this were the right view, then the aim would be to construct something like updateless EDT.
I give a variant of the smoking lesion problem which overcomes an objection to the classic smoking lesion, and which is solved correctly by CDT, but which is not solved by updateless EDT.
UDT as originally described involved a “mathematical intuition module” which would take some sort of logical counterfactual. However, I’ll be using the term “updateless” purely to describe the decision theory you get by asking another decision theory to choose a policy as soon as it is born, rather than using that decision theory all along. Hence, updateless CDT is what you get when you ask a CDT agent to choose a policy; updateless EDT is what you get when you ask an EDT agent to choose a policy.
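To make that definition slightly more concrete, here is one way to write it down (my notation, not anything from the original UDT write-ups): the updateless version of a decision theory selects, from the prior "at birth", the policy that the base theory rates best.

$$
\pi^{*}_{\text{updateless-EDT}} = \operatorname*{arg\,max}_{\pi}\; \mathbb{E}_{\text{prior}}\big[\,U \mid \Pi = \pi\,\big],
\qquad
\pi^{*}_{\text{updateless-CDT}} = \operatorname*{arg\,max}_{\pi}\; \mathbb{E}_{\text{prior}}\big[\,U \mid do(\Pi = \pi)\,\big],
$$

where $\Pi$ is the agent's policy (a map from observations to actions) and the expectations are taken from the agent's initial state of knowledge, before conditioning on any observations.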
I’ll also be treating “counterfactual” as synonymous with “causal”. There are cases where physical causal reasoning seems to give the wrong counterfactual structure, like Newcomb’s problem. I won’t be trying to solve that problem here; I’m more trying to ask whether there are any cases where causal/counterfactual reasoning looks like what we really want at all.
The “common wisdom”, as I have observed it, is that we should be aiming to construct something like an updateless CDT which works well with logical uncertainty. I’m not sure whether that would be the dominant opinion right now, but certainly TDT set things in this direction early on. From my perspective, I don’t think it’s been adequately established that we should prefer updateless CDT to updateless EDT; providing some evidence on that is the implicit aim of this post. Explicitly, I’ll mostly be contrasting updateful CDT with updateful EDT.
It might be impossible to construct an appropriate logically-updateless perspective, in which case we need logical counterfactuals to compute the effects of actions/policies; but, in some sense this would only be because we couldn’t make a sufficiently ignorant prior. (I think of my own attempt at logically updateless decisions this way.) However, that would be unfortunate; it should be easier to construct good counterfactuals if we have a stronger justification for counterfactual reasoning being what we really want. Hence, another aim of this post is to help provide constraints on what good counterfactual reasoning should look like, by digging into reasons to want counterfactuals.
Thanks go to Alex Mennen, Evan Lloyd, and Daniel Demski for conversations sharpening these ideas.
Why did anyone think CDT was a good idea?
The original reasons for preferring CDT to EDT are largely suspect, falling to the “Why Ain’cha Rich?” objection. From the LessWrong/MIRI perspective, it’s quite surprising that Newcomb’s Problem was the original motivation for CDT, when we now use it as a point against CDT. This is not the only example. The SEP article on CDT gives Prisoner’s Dilemma as the first example of CDT’s importance, pointing out that EDT cooperates with a copy of itself in PD because, unlike CDT, it fails to take into account that cooperating is “auspicious but not efficacious”.
The Smoking Lesion problem isn’t vulnerable to “Why Ain’cha Rich?”, and so has been a more popular justification for CDT in the LessWrong-sphere. However, I don’t think it provides a good justification for CDT at all. The problem is ill-posed: it is assumed that those with a smoking lesion are more likely to smoke, but this is inconsistent with their being EDT agents (who are not inclined to smoke given the problem setup). (Cheating Death in Damascus points out that Murder Lesion is ill-posed due to similar problems.)
So, for some time, I have thought that the main effective argument against EDT and for CDT was XOR blackmail. However, XOR blackmail is also solved by updateless EDT. We want to go updateless either way, so this doesn’t give us reason to favor CDT over EDT.
Regardless, I’m quite sympathetic to the intuition behind CDT, namely that it’s important to consider the counterfactual consequences of your actions rather than just the conditional expected utilities. Furthermore, the following idea (which I think I got from an academic paper on CDT, but haven’t been able to track down which one) seems at least plausible to me:
If we were dealing with ideal decision agents, who could condition on all the inputs to their decision process, CDT would equal EDT. However, imperfect agents often cannot condition on all their inputs. In this situation, EDT will get things wrong. CDT corrects this error by cutting the relationships which would be screened off if only we could condition on those inputs.
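One way to make this idea precise (a sketch in my own notation, under a back-door-style assumption that the inputs X screen the action off from everything it merely correlates with) is to compare the two theories’ expected values:

$$
\mathbb{E}_{\text{EDT}}[U \mid a] \;=\; \sum_{x} P(x \mid a)\,\mathbb{E}[U \mid a, x],
\qquad
\mathbb{E}_{\text{CDT}}[U \mid do(a)] \;=\; \sum_{x} P(x)\,\mathbb{E}[U \mid a, x].
$$

If the agent can condition on a particular $x$, both reduce to $\mathbb{E}[U \mid a, x]$ and the two theories agree; the divergence comes entirely from EDT weighting by $P(x \mid a)$ rather than $P(x)$ when $x$ cannot be conditioned on.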
This intuition might seem odd given all the cases where CDT doesn’t do so well. If CDT fails Newcomb’s problem and EDT doesn’t, it seems CDT is at best a hack which repairs some cases fitting the above description at the expense of damaging performance in other cases. Perhaps this is the right perspective. But, we could also think of Newcomblike decision problems as cases where the classical causal structure is just wrong. With the right causal structure, we might postulate, CDT would always be as good as or better than EDT. This is part of what TDT/FDT are trying to do.
I’ll charitably assume that we can find appropriate causal structures. The question is, even given that, can we make any sense of the argument for CDT? Smoking Lesion was supposed to be an example of this, but the problem was ill-posed. So, can we repair it?
A Smoking Lesion Steelman
Agents who Don’t Know their Utility Function
I’ll be assuming a very particular form of utility function ignorance. Suppose that agents do not have access to their own source code. Furthermore, whether CDT or EDT, the agents have an “epistemic module” which holds the world-model: either just the probability distribution (for EDT) or the probability distribution plus causal beliefs. This epistemic module is ignorant of the utility function. However, a “decision module” uses what the epistemic module knows, together with the utility function, to calculate the value of each possible action (in the CDT or EDT sense).
These agents also lack introspective capabilities of any kind. The epistemic module cannot watch the decision module calculate a utility and get information about the utility function that way. (This blocks things like the tickle defense.)
These agents therefore have full access to their own utility functions for the sake of doing the usual decision-theoretic calculations. Nonetheless, they lack knowledge of their own utility function in the epistemic sense. They can only infer what their utility function might be from their actions. This may be difficult, because, as we shall see, EDT agents are sometimes motivated to act in a way which avoids giving information about the utility function.
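Here is a minimal sketch of the module separation described above (my own illustrative code; the class names, action labels, and belief numbers are not from anywhere, and the specific probabilities are placeholders for the equilibrium beliefs worked out later):

```python
class EpistemicModule:
    """Holds the world-model: conditional probabilities the robot has
    inferred from the population. It has no access to this robot's own
    utility function and cannot watch the decision module at work."""
    def __init__(self, p_killed_given_smoke, p_killed_given_not_smoke):
        self.p_killed = {
            "smoke": p_killed_given_smoke,
            "refrain": p_killed_given_not_smoke,
        }

class DecisionModule:
    """Combines the epistemic module's beliefs with the robot's actual
    utility function to score actions in the EDT sense, E[U | action]."""
    def __init__(self, epistemic, utility):
        self.epistemic = epistemic
        self.utility = utility  # utility(action, killed) -> float

    def edt_value(self, action):
        p = self.epistemic.p_killed[action]
        return (p * self.utility(action, True)
                + (1 - p) * self.utility(action, False))

# A smoke-lover's utility function: +10 for smoking, -100 for being killed.
def smoke_lover_utility(action, killed):
    return (10 if action == "smoke" else 0) + (-100 if killed else 0)

# Illustrative (made-up) beliefs about P(killed | action):
beliefs = EpistemicModule(p_killed_given_smoke=0.55, p_killed_given_not_smoke=0.50)
agent = DecisionModule(beliefs, smoke_lover_utility)
print(agent.edt_value("smoke"), agent.edt_value("refrain"))  # -45.0 -50.0
```

The point of the split is that the epistemic module’s probabilities are computed purely from population-level evidence, while the utility function only ever enters through the decision module.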
While I admit this is simply a bad agent design, I don’t think it’s unrealistic as an extrapolation of current AI systems, or particularly bad as a model of (one aspect of) human ignorance about our own values.
More importantly, this is just supposed to be a toy example to illustrate what happens when an agent is epistemically ignorant of an important input to its decision process. It would be nice to have an example which doesn’t arise from an obviously bad agent design, but I don’t have one.
Smoking Robots
Now suppose that there are two types of robots, produced in equal quantities: robots who like smoking, and robots who are indifferent to smoking. I’ll call these “smoke-lovers” and “non-smoke-lovers”. Smoke-lovers assign smoking +10 utility. Non-smoke-lovers assign smoking −1, due to the expense of obtaining something to smoke. Also, no robot wants to be destroyed; all robots assign being destroyed −100 utility.
There is a hunter who systematically destroys all smoke-loving robots, whether they choose to smoke or not. We can imagine that the robots have serial numbers, which they cannot remove or obscure. The hunter has a list of smoke-lover serial numbers, and so, can destroy all and only the smoke-lovers. The robots don’t have access to the list, so their own serial numbers tell them nothing.
So, the payoffs look like this:
Smoke-lover:
    Smokes:
        Killed: −90
        Not killed: +10
    Doesn’t smoke:
        Killed: −100
        Not killed: 0

Non-smoke-lover:
    Smokes:
        Killed: −101
        Not killed: −1
    Doesn’t smoke:
        Killed: −100
        Not killed: 0
All robots know all of this. They just don’t know their own utility function (epistemically).
If we suppose that the robots are EDT agents with epsilon-exploration, what happens?
Non-smoke-lovers have no reason ever to smoke, so they’ll smoke only with the exploration probability epsilon. The smoke-lovers are more complicated.
The expected utility for different actions depends on the frequency of those actions in the population of smoke-lovers and non-smoke-lovers, so there’s a Nash-equilibrium type solution. It can’t be that all agents smoke only with probability epsilon; then smoking would provide no evidence about the smoker’s utility function, so the smoke-lovers would happily decide to smoke. However, it also can’t be that smoke-lovers reliably smoke while non-smoke-lovers don’t, because then the conditional probability of being killed given that you smoke would be too high.
The equilibrium will be for smoke-lovers to smoke just a little more frequently than epsilon, in such a way as to equalize the EDT smoke-lover’s expected utility for smoking and not smoking. (We can imagine a very small amount of noise in agents’ utility calculations to explain how this mixed-strategy equilibrium is actually achieved.)
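A rough numerical check of that claim (my own sketch, not from the original problem statement; it assumes equal numbers of the two robot types, non-smoke-lovers smoking only via epsilon-exploration, and smoke-lovers smoking with some probability p):

```python
EPSILON = 0.01  # exploration probability (an assumed value)

def p_killed_given(action, p_smoke_lover_smokes):
    """P(killed | action), computed from population frequencies.
    Only smoke-lovers are killed; the prior on each type is 1/2."""
    p, eps = p_smoke_lover_smokes, EPSILON
    if action == "smoke":
        return p / (p + eps)                      # fraction of smokers who are smoke-lovers
    return (1 - p) / ((1 - p) + (1 - eps))        # fraction of non-smokers who are smoke-lovers

def smoke_lover_edt_gap(p):
    """EU(smoke) - EU(don't smoke) for an EDT smoke-lover, given that
    smoke-lovers as a population smoke with probability p."""
    eu_smoke = 10 - 100 * p_killed_given("smoke", p)
    eu_refrain = -100 * p_killed_given("refrain", p)
    return eu_smoke - eu_refrain

# Bisection for the indifference point (the gap is decreasing in p).
lo, hi = EPSILON, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if smoke_lover_edt_gap(mid) > 0:
        lo = mid
    else:
        hi = mid

print(round(lo, 4))  # ~0.0149 for epsilon = 0.01: just a little above epsilon
```

With epsilon = 0.01 this lands at roughly p ≈ 0.015, i.e. smoke-lovers smoke only slightly more often than the exploration rate, as described above.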
As with the original smoking lesion problem, this looks like a mistake on the part of EDT. Smoking does not increase a robot’s odds of being hunted down and killed. CDT smoke-lovers would choose to smoke.
Furthermore, this isn’t changed at all by trying updateless reasoning. There’s not really any more-ignorant position for an updateless agent to back off to, at least not one which would be helpful. So, it seems we really need CDT for this one.
What should we think of this?
I think the main question here is how this generalizes to other types of lack of self-knowledge. It’s quite plausible that any conclusions from this example depend on the details of my utility-ignorance model, which would mean we can fix things by avoiding utility-ignorance (rather than adopting CDT).
On the other hand, maybe there are less easily avoidable forms of self-ignorance which lead to similar conclusions. Perhaps the argument that CDT outperforms EDT in cases where EDT isn’t able to condition on all its inputs can be formalized. If so, it might even provide an argument which is persuasive to an EDT agent, which would be really interesting.