Of course, in these two problems we know which causal links to draw; they were written to be simple enough for that. The trick is to have a general theory that draws the right links here without drawing wrong links in other problems, and which is formalizable, so that it can answer problems too complicated for common sense to handle.
Among human beings, the relevant distinction is between decisions made before or after the other agent becomes aware of your decision, and you can certainly come up with examples where mutual ignorance occurs.
Finally, situations with iterated moves can also be decided differently by different decision theories: consider the variant of Newcomb’s Problem where the big box is transparent as well! A CDT agent will always find the big box empty, and two-box; a UDT/ADT agent will always find the big box full, and one-box. (TDT might two-box in that case, actually.)
Of course, in these two problems we know which causal links to draw. [...] The trick is to have a general theory that draws the right links here without drawing wrong links in other problems,
If you don’t know that Omega’s decision depends on yours, or that the other player in a Prisoner’s Dilemma is your mental clone, then no theory can help you make the right choice; you lack the crucial piece of information. If you do know this information, then simply cranking through standard maximization of expected utility gives you the right answer.
Among human beings, the relevant distinction is between decisions made before or after the other agent becomes aware of your decision
No, the relevant distinction is whether or not your decision is relevant to predicting (postdicting?) the other agent’s decision. The cheat in Newcomb’s Problem and the PD-with-a-clone problem is this:
you create an unusual situation where X’s decision is clearly relevant to predicting Y’s decision, even though X’s decision does not precede Y’s,
then you insist that X must pretend that there is no connection, merely because of that lack of temporal precedence, even though he knows better.
Let’s take a look at what happens in Newcomb’s problem if we just grind through the math. We have
P(box 2 has $1 million | you choose to take both boxes) = 0
P(box 2 has $1 million | you choose to take only the second box) = 1
E[money gained | you choose to take both boxes] = $1000 + 0 * $1,000,000 = $1000
E[money gained | you choose to take only the second box] = 1 * $1,000,000 = $1,000,000
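In code, the same conditioning looks like this (a throwaway Python sketch; the 0 and 1 are just the perfect-predictor assumption stated above):

```python
# Expected gain in Newcomb's problem, treating your choice as evidence
# about box 2. The 0/1 conditional probabilities encode a perfect predictor.

SMALL = 1_000        # box 1 always contains $1,000
BIG = 1_000_000      # box 2 contains $1,000,000 or nothing

# P(box 2 contains $1M | your choice)
P_BIG = {"two-box": 0.0, "one-box": 1.0}

def expected_gain(choice: str) -> float:
    """E[money gained | choice], conditioning on the choice you make."""
    box2 = P_BIG[choice] * BIG
    return SMALL + box2 if choice == "two-box" else box2

for choice in ("two-box", "one-box"):
    print(choice, expected_gain(choice))
# two-box 1000.0
# one-box 1000000.0
```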
So where’s the problem?
That’s evidential decision theory, which gives the wrong answer to the smoking lesion problem.
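For concreteness, here is what that looks like in a toy smoking lesion model (the probabilities and utilities below are made-up numbers for illustration, not part of the original problem): a hidden lesion causes both the urge to smoke and cancer, while smoking itself is causally harmless. Conditioning on the act, exactly as in the Newcomb calculation above, still tells you not to smoke.

```python
# Toy smoking-lesion model. A hidden lesion causes both smoking and cancer;
# smoking has no causal effect on cancer. All numbers are illustrative.

P_LESION = 0.5
P_SMOKE_GIVEN_LESION = {True: 0.9, False: 0.1}   # the lesion makes you want to smoke
P_CANCER_GIVEN_LESION = {True: 0.9, False: 0.1}  # cancer depends only on the lesion
U_SMOKE, U_CANCER = 10, -100                     # you enjoy smoking; you really dislike cancer

def p_lesion_given_smoke(smoke: bool) -> float:
    """Bayes: observing your own smoking is evidence about the lesion."""
    num = denom = 0.0
    for lesion, p_l in ((True, P_LESION), (False, 1 - P_LESION)):
        p_s = P_SMOKE_GIVEN_LESION[lesion]
        p_s = p_s if smoke else 1 - p_s
        denom += p_l * p_s
        if lesion:
            num += p_l * p_s
    return num / denom

def edt_utility(smoke: bool) -> float:
    """Condition on the act: choosing to smoke counts as evidence of the lesion."""
    p_l = p_lesion_given_smoke(smoke)
    p_cancer = p_l * P_CANCER_GIVEN_LESION[True] + (1 - p_l) * P_CANCER_GIVEN_LESION[False]
    return (U_SMOKE if smoke else 0) + p_cancer * U_CANCER

def cdt_utility(smoke: bool) -> float:
    """Intervene on the act: deciding to smoke gives no evidence about the lesion."""
    p_cancer = P_LESION * P_CANCER_GIVEN_LESION[True] + (1 - P_LESION) * P_CANCER_GIVEN_LESION[False]
    return (U_SMOKE if smoke else 0) + p_cancer * U_CANCER

print("EDT:", edt_utility(True), edt_utility(False))  # -72.0 -18.0 -> says don't smoke
print("CDT:", cdt_utility(True), cdt_utility(False))  # -40.0 -50.0 -> says smoke
```

With these numbers, conditioning on the act rates not smoking higher even though smoking cannot cause cancer, while holding P(lesion) fixed under the intervention rates smoking higher.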