Are you sure that you need an advanced decision theory to handle the one-box/two-box problem, or the PD-with-a-mental-clone problem? You write that
a CDT agent assumes that X’s decision is independent from the simultaneous decisions of the Ys; that is, X could decide one way or another and everyone else’s decisions would stay the same.
Well, that’s a common situation analyzed in game theory, but it’s not essential to CDT. Consider playing a game of chess: your choice clearly affects the choice of your opponent. Or consider the decision of whether to punch a 6′5″, 250 lb. muscle-man who has just insulted you—your choice again has a strong influence on his choice of action. CDT is adequate for analyzing both of these situations.
It is true that in my two examples the other agent’s choice is made after X’s choice, rather than being simultaneous with his. But of what relevance is the stipulation of simultaneity? Its only relevance is that it leads one to assume that the others’ decisions are independent of X’s decision! That is, the root of the difficulty is simply that you’re analyzing the problem using an assumption that you know to be false!
It seems to me that you can analyze the one-box/two-box problem or the PD-with-a-mental-clone problem perfectly well using CDT; you just have to use the right causal graph. The causal graph needs an arc from your decision to Omega’s prediction for the first problem, and an arc from your decision to the clone’s decision in the second problem. Then you do the usual maximization of expected utility.
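To make that concrete, here is a minimal sketch (in Python) of the Newcomb calculation with the extra decision-to-prediction arc. It assumes a perfect predictor, and the function names and payoff layout are just illustrative, not anyone's canonical formalism:

```python
# A minimal sketch of "CDT with the right causal graph" on Newcomb's problem:
# the arc from your decision to Omega's prediction is modeled by letting the
# probability that the big box is full depend on your action.
# Assumes a perfect predictor; all numbers are illustrative.

SMALL_BOX = 1_000         # the transparent box always holds $1,000
BIG_BOX = 1_000_000       # the opaque box holds $1,000,000 iff Omega predicted one-boxing
PREDICTOR_ACCURACY = 1.0  # assumption: Omega predicts your choice perfectly

def expected_utility(action: str, accuracy: float = PREDICTOR_ACCURACY) -> float:
    """Expected payoff of `action` once the decision -> prediction arc is in the graph."""
    p_big_box_full = accuracy if action == "one-box" else 1.0 - accuracy
    if action == "one-box":
        return p_big_box_full * BIG_BOX
    return SMALL_BOX + p_big_box_full * BIG_BOX

for action in ("one-box", "two-box"):
    print(action, expected_utility(action))
# one-box -> 1000000.0, two-box -> 1000.0
```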
Of course, in these two problems we know which causal links to draw. They were written to be simple enough. The trick is to have a general theory that draws the right links here without drawing wrong links in other problems, and which is formalizable so that it can answer problems more complicated than common sense can handle.
Among human beings, the relevant distinction is between decisions made before or after the other agent becomes aware of your decision, and you can certainly come up with examples where mutual ignorance happens.
Finally, situations with iterated moves can be decided differently by different decision theories as well: consider Newcomb’s Problem where the big box is transparent as well! A CDT agent will always find the big box empty and two-box; a UDT/ADT agent will always find the big box full and one-box. (TDT might two-box in that case, actually.)
Of course, in these two problems we know which causal links to draw. [...] The trick is to have a general theory that draws the right links here without drawing wrong links in other problems,
If you don’t know that Omega’s decision depends on yours, or that the other player in a Prisoner’s Dilemma is your mental clone, then no theory can help you make the right choice; you lack the crucial piece of information. If you do know this information, then simply cranking through standard maximization of expected utility gives you the right answer.
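For instance, here is a minimal sketch of that expected-utility crank for the PD-with-a-clone case, assuming the clone mirrors your move with certainty; the payoff numbers are the usual illustrative T > R > P > S ordering, not values from the original problem statement:

```python
# A minimal sketch of maximizing expected utility in the PD-with-a-mental-clone
# problem once you know the other player copies your move.

PAYOFF = {              # payoff to X for (X's move, Y's move)
    ("C", "C"): 3,      # reward for mutual cooperation
    ("C", "D"): 0,      # sucker's payoff
    ("D", "C"): 5,      # temptation to defect
    ("D", "D"): 1,      # punishment for mutual defection
}

def expected_payoff(my_move: str, p_clone_mirrors: float = 1.0) -> float:
    """Expected payoff when Y is a clone who copies X's move with probability p_clone_mirrors."""
    other = "D" if my_move == "C" else "C"
    return (p_clone_mirrors * PAYOFF[(my_move, my_move)]
            + (1 - p_clone_mirrors) * PAYOFF[(my_move, other)])

for move in ("C", "D"):
    print(move, expected_payoff(move))
# C -> 3.0, D -> 1.0: with the clone link in the model, cooperation comes out ahead.
```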
Among human beings, the relevant distinction is between decisions made before or after the other agent becomes aware of your decision
No, the relevant distinction is whether or not your decision is relevant to predicting (postdicting?) the other agent’s decision. The cheat in Newcomb’s Problem and the PD-with-a-clone problem is this:
you create an unusual situation where X’s decision is clearly relevant to predicting Y’s decision, even though X’s decision does not precede Y’s,
then you insist that, because X’s decision does not temporally precede Y’s, X must pretend that there is no connection, even though he knows better.
Let’s take a look at what happens in Newcomb’s problem if we just grind through the math. We have
P(box 2 has $1 million | you choose to take both boxes) = 0
P(box 2 has $1 million | you choose to take only the second box) = 1
E[money gained | you choose to take both boxes] = $1000 + 0 * $1,000,000 = $1000
E[money gained | you choose to take only the second box] = 1 * $1,000,000 = $1,000,000
So where’s the problem?
That’s evidential decision theory, which gives the wrong answer to the smoking lesion problem.
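For concreteness, here is a minimal sketch of how that failure shows up in the smoking lesion problem; every probability and utility below is an illustrative assumption. Conditioning on your own action (the evidential move) makes smoking look like bad news, while keeping the lesion’s prior fixed (the causal move) correctly leaves smoking as the better choice:

```python
# A minimal sketch of the smoking lesion problem. A hidden lesion causes both
# the urge to smoke and cancer; smoking itself has no causal effect on cancer.
# All probabilities and utilities are illustrative assumptions.

P_LESION = 0.1
P_SMOKE_GIVEN_LESION = 0.9
P_SMOKE_GIVEN_NO_LESION = 0.1
P_CANCER_GIVEN_LESION = 0.8
P_CANCER_GIVEN_NO_LESION = 0.01

U_SMOKING = 1_000          # enjoyment of smoking
U_CANCER = -1_000_000      # disutility of cancer

def evidential_eu(smoke: bool) -> float:
    """EDT-style: treat your own action as evidence about the lesion."""
    p_smoke = (P_LESION * P_SMOKE_GIVEN_LESION
               + (1 - P_LESION) * P_SMOKE_GIVEN_NO_LESION)
    if smoke:
        p_lesion = P_LESION * P_SMOKE_GIVEN_LESION / p_smoke
    else:
        p_lesion = P_LESION * (1 - P_SMOKE_GIVEN_LESION) / (1 - p_smoke)
    p_cancer = (p_lesion * P_CANCER_GIVEN_LESION
                + (1 - p_lesion) * P_CANCER_GIVEN_NO_LESION)
    return (U_SMOKING if smoke else 0) + p_cancer * U_CANCER

def causal_eu(smoke: bool) -> float:
    """CDT-style: smoking has no causal arrow into cancer, so the lesion's prior is unchanged."""
    p_cancer = (P_LESION * P_CANCER_GIVEN_LESION
                + (1 - P_LESION) * P_CANCER_GIVEN_NO_LESION)
    return (U_SMOKING if smoke else 0) + p_cancer * U_CANCER

for smoke in (True, False):
    print("smoke" if smoke else "abstain",
          round(evidential_eu(smoke)), round(causal_eu(smoke)))
# EDT ranks abstaining higher (smoking is "bad news"); CDT ranks smoking higher.
```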