An Intuitive Introduction to Causal Decision Theory

Like any decision theory, Causal Decision Theory (CDT) aims to maximize expected utility; it does this by looking at the causal effects of each available action in a problem. For example, in Problem 1, taking box A has the causal effect of earning you $100, whereas taking box B causes you to earn $500. Since $500 is more than $100, CDT says to take box B (as any decision theory worth anything should). Similarly, CDT advises taking box A in Problem 2.
CDT’s rule of looking at an action’s causal effects makes sense: if you’re deciding which action to take, you want to know how your actions change the environment. And as we will see later, CDT correctly solves the problem of the Smoking Lesion. But first, we have to ask ourselves: what is causality?
What is causality?
A formal description of causality is beyond the scope of this post (and sequence), but intuitively speaking, causality is about stuff that makes stuff happen. If I throw a glass vase onto concrete, it will break; my action of throwing the vase caused it to break.
You may have heard that correlation doesn’t necessarily imply causality, which is true. For example, I’d bet hand size and foot size in humans strongly correlate: if we measured the hands and feet of a million people, those with larger hands would—on average—have larger feet as well, and vice versa. But hopefully we can agree that hand size doesn’t have a causal effect on foot size, or vice versa: your hands aren’t large or small because your feet are large or small, even though we might be able to predict your foot size quite accurately from your hand size. Rather, hand size and foot size have common causes, like genetics (which determine how large a person can grow) and the quality and quantity of food eaten.
Eliezer Yudkowsky describes causality in the following neat way:
There’s causality anywhere there’s a noun, a verb, and a subject.
“I broke the vase” and “John kicks the ball” are both examples of this.
With the hope that the reader now has an intuitive notion of causality, we can move on to see how CDT handles the Smoking Lesion problem.
Smoking Lesion
An agent is debating whether or not to smoke. She knows that smoking is correlated with an invariably fatal variety of lung cancer, but the correlation is (in this imaginary world) entirely due to a common cause: an arterial lesion that causes those afflicted with it to love smoking and also (99% of the time) causes them to develop lung cancer. There is no direct causal link between smoking and lung cancer. Agents without this lesion contract lung cancer only 1% of the time, and an agent can neither directly observe nor control whether she suffers from the lesion. The agent gains utility equivalent to $1,000 by smoking (regardless of whether she dies soon), and gains utility equivalent to $1,000,000 if she doesn’t die of cancer. Should she smoke, or refrain?
CDT says she should smoke. The agent either gets lung cancer or not; having the lesion certainly increases the risk, but smoking doesn’t causally affect whether or not she has the lesion, and it has no direct causal effect on her probability of getting lung cancer either. CDT therefore reasons that whether she gets the $1,000,000 in utility is beyond her control, while smoking simply gets her $1,000 more than not smoking. Smokers in this hypothetical world do get lung cancer more often than non-smokers, but that is because there are relatively more smokers in the part of the population that has the lesion, which is the cause of lung cancer. Smoking or not doesn’t change whether the agent is in that part of the population; CDT therefore (correctly) says the agent should smoke. The Smoking Lesion situation is actually similar to the hands and feet example above: just as genetics cause people to have larger hands and feet, the lesion causes people to get cancer and to enjoy smoking.
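To see this reasoning in numbers, here is a minimal sketch in Python. The 50% prior probability of having the lesion is my own assumption (the problem statement gives no number); the point is that it doesn’t matter which prior you use.

```python
# CDT's view of the Smoking Lesion: the probability of having the lesion
# is not causally affected by the decision, so it is the same for both actions.
P_LESION = 0.5  # assumed prior; the problem statement gives no number
P_CANCER_GIVEN_LESION = 0.99
P_CANCER_GIVEN_NO_LESION = 0.01

SMOKING_UTILITY = 1_000        # utility gained by smoking
NO_CANCER_UTILITY = 1_000_000  # utility gained by not dying of cancer

def causal_expected_utility(smoke: bool) -> float:
    """Expected utility of smoking or not, holding the lesion probability fixed."""
    p_cancer = (P_LESION * P_CANCER_GIVEN_LESION
                + (1 - P_LESION) * P_CANCER_GIVEN_NO_LESION)
    return (SMOKING_UTILITY if smoke else 0) + (1 - p_cancer) * NO_CANCER_UTILITY

print(causal_expected_utility(smoke=True))   # 501000.0
print(causal_expected_utility(smoke=False))  # 500000.0
```

Whatever prior you plug in, smoking comes out exactly $1,000 ahead of not smoking, which is why CDT smokes.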
CDT makes intuitive sense, and seems to solve problems correctly so far. However, it does have a major flaw, which will become apparent in Newcomb’s Problem.
Newcomb’s Problem
A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game. In this game, Omega selects a human being, sets down two boxes in front of them, and flies away.
Box A is transparent and contains a thousand dollars. Box B is opaque, and contains either a million dollars, or nothing.
You can take both boxes, or take only box B.
And the twist is that Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.
Omega has been correct on each of 100 observed occasions so far—everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars. (We assume that box A vanishes in a puff of smoke if you take only box B; no one else can take box A afterward.)
Before you make your choice, Omega has flown off and moved on to its next game. Box B is already empty or already full.
Omega drops two boxes on the ground in front of you and flies off.
Do you take both boxes, or only box B?
(Note that “iff” means “if and only if”.)
How does CDT approach this problem? Well, let’s look at the causal effects of taking both boxes (“two-boxing”) and taking one box (“one-boxing”).
First of all, note that Omega has already made its prediction. Your action now doesn’t causally affect this, as you can’t cause the past. Omega made its prediction and based upon it either filled box B or not. If box B isn’t filled, one-boxing gives you nothing; two-boxing, however, would give you the contents of box A, earning you $1,000. If box B is filled, one-boxing gets you $1,000,000. That’s pretty sweet, but two-boxing gets you $1,000,000 + $1,000 = $1,001,000. In both cases, two-boxing beats one-boxing by $1,000. CDT therefore two-boxes.
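To spell out this dominance reasoning, here is a minimal sketch in Python (purely illustrative):

```python
BOX_A = 1_000
BOX_B_IF_FULL = 1_000_000

# CDT treats Omega's (already-made) prediction as fixed and compares the two
# actions within each possible state of box B.
for box_b in (0, BOX_B_IF_FULL):
    one_box = box_b
    two_box = box_b + BOX_A
    print(f"box B holds ${box_b}: one-boxing gets ${one_box}, two-boxing gets ${two_box}")

# In either state, two-boxing comes out exactly $1,000 ahead, so CDT two-boxes.
```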
John, who is convinced by CDT-style reasoning, takes both boxes. Omega predicted he would, so John only gets $1,000. Had he one-boxed, Omega would have predicted that, giving him $1,000,000. If only he hadn’t followed CDT’s advice!
Is Omega even possible?
At this point, you may be wondering whether Newcomb’s Problem is relevant: is it even possible to make such accurate predictions of someone’s decision? There are two important points to note here.
First, yes, such accurate predictions might actually be possible, especially if you’re a robot: Omega could then have a copy—a model—of your decision-making software, to which it feeds Newcomb’s Problem to see whether the model one-boxes or two-boxes. Based on that, Omega predicts whether you will one-box or two-box, and fixes the contents of box B accordingly. Now, you’re not a robot, but future brain-scanning techniques might still make it possible to form an accurate model of your decision procedure.
The second point to make here is that predictions need not be this accurate in order to have a problem like Newcomb’s. If Omega could predict your action with only 60% accuracy (meaning its prediction is wrong 40% of the time), e.g. by giving you some tests first and examining the answers, the problem doesn’t fundamentally change. CDT would still two-box: given Omega’s prediction (whatever its accuracy), two-boxing still earns you $1,000 more than one-boxing. But, of course, Omega’s prediction is connected to your decision: two-boxing gives you a 0.6 probability of earning $1,000 (the case where Omega correctly predicted you’d two-box) and a 0.4 probability of getting $1,001,000 (the case where Omega’s prediction is wrong), whereas one-boxing gives you a 0.6 probability of getting $1,000,000 and a 0.4 probability of getting $0. This means two-boxing has an expected utility of 0.6 x $1,000 + 0.4 x $1,001,000 = $401,000, whereas the expected utility of one-boxing is 0.6 x $1,000,000 + 0.4 x $0 = $600,000. One-boxing still wins, and CDT still goes wrong.
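Here is the same calculation as a minimal sketch in Python, using the 60% accuracy figure from above:

```python
# Expected utility of each choice against a predictor with 60% accuracy.
ACCURACY = 0.6
BOX_A = 1_000
BOX_B_IF_FULL = 1_000_000

# If you two-box, Omega predicted it correctly with probability ACCURACY and
# left box B empty; otherwise box B is full and you get both amounts.
ev_two_box = ACCURACY * BOX_A + (1 - ACCURACY) * (BOX_A + BOX_B_IF_FULL)

# If you one-box, Omega predicted it correctly with probability ACCURACY and
# filled box B; otherwise you walk away with nothing.
ev_one_box = ACCURACY * BOX_B_IF_FULL + (1 - ACCURACY) * 0

print(ev_two_box)  # 401000.0
print(ev_one_box)  # 600000.0
```

These are the expected utilities of being the kind of agent who two-boxes or one-boxes; CDT, which holds the prediction fixed, still insists that two-boxing is $1,000 better.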
In fact, people’s facial microexpressions can give clues about what they will decide, making many real-life problems Newcomblike.

Newcomb’s Problem vs. Smoking Lesion

You might be wondering about the exact difference between Newcomb’s Problem and Smoking Lesion: why does the author recommend smoking in Smoking Lesion, while also saying one-boxing is the better choice in Newcomb’s Problem? After all, two-boxers indeed often find an empty box in Newcomb’s Problem—but isn’t it also true that smokers often get lung cancer in Smoking Lesion?
Yes. But the latter has nothing to do with the decision to smoke, whereas the former has everything to do with the decision to two-box. Let’s indeed assume Omega has a model of your decision procedure in order to make its prediction. Then whatever you decide, the model also decided (with perhaps a small error rate). This is no different from two calculators both returning “4” on “2 + 2”: if your calculator outputs “4” on “2 + 2”, you know that, when Fiona entered “2 + 2” on her calculator a day earlier, hers must have output “4” as well. It’s the same in Newcomb’s Problem: if you decide to two-box, so did Omega’s model of your decision procedure; similarly, if you decide to one-box, so did the model. Two-boxing then systematically leads to earning only $1,000, while one-boxing gets you $1,000,000. Your decision procedure is instantiated in two places: in your head and in Omega’s, and you can’t act as if your decision has no impact on Omega’s prediction.
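Here is a toy sketch of that point, under the assumption (as above) that Omega literally runs a copy of your decision procedure; the function below is purely illustrative:

```python
def decision_procedure() -> str:
    """Your decision procedure; Omega runs an identical copy of it in advance."""
    return "one-box"  # change this to "two-box" and the prediction changes too

# Omega's prediction is just the output of the same procedure, run earlier.
prediction = decision_procedure()
box_b = 1_000_000 if prediction == "one-box" else 0

# Your actual choice, made later, is the output of the very same procedure.
choice = decision_procedure()
payoff = box_b + (1_000 if choice == "two-box" else 0)
print(payoff)  # 1000000 as written; editing the procedure to two-box yields 1000
```

There is no way to set things up so that the later call returns “two-box” while the earlier one returned “one-box”: that is the sense in which your decision does have an impact on Omega’s prediction.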
In Smoking Lesion, smokers do often get lung cancer, but that’s “just” a statistical relation. Your decision procedure has no effect on the presence of the lesion or on whether you get lung cancer; the lesion does give people a fondness for smoking, but the decision to smoke is still theirs and has no effect on getting lung cancer.
Note that if Omega doesn’t have a model of your decision procedure, two-boxing is the better choice. For example, if, historically, people wearing brown shoes always one-boxed, Omega might base its prediction on that instead of on a model of your decision procedure. In that case, your decision has no effect on Omega’s prediction, and two-boxing simply earns you $1,000 more than one-boxing.
Conclusion
So it turns out CDT doesn’t solve every problem correctly. In the next post, we will take a look at another decision theory: Evidential Decision Theory, and how it approaches Newcomb’s Problem.