I don’t understand the need for this “advanced” decision theory. The situations you mention—Omega and the boxes, PD with a mental clone—are highly artificial; no human being has ever encountered such a situation. So what relevance do these “advanced” decision theories have to decisions of real people in the real world?
They’re no more artificial than the rest of Game Theory- no human being has ever known their exact payoffs for consequences in terms of utility, either. Like I said, there may be a good deal of advanced-decision-theory-structure in the way people subconsciously decide to trust one another given partial information, and that’s something that CDT analysis would treat as irrational even when beneficial.
One bit of relevance is that “rational” has been wrongly conflated with strategies akin to defecting in the Prisoner’s Dilemma, or being unable to genuinely promise anything with high enough stakes, and advanced decision theories are the key to seeing that the rational ideal doesn’t fail like that.
They’re no more artificial than the rest of Game Theory-
That’s an invalid analogy. We use mathematical models that we know are idealized approximations to reality all the time… but they are intended to be approximations of actually encountered circumstances. The examples given in the article bear no relevance to any circumstance any human being has ever encountered.
there may be a good deal of advanced-decision-theory-structure in the way people subconsciously decide to trust one another given partial information, and that’s something that CDT analysis would treat as irrational even when beneficial.
That doesn’t follow from anything said in the article. Care to explain further?
One bit of relevance is that “rational” has been wrongly conflated with strategies akin to defecting in the Prisoner’s Dilemma,
Defecting is the right thing to do in the Prisoner’s Dilemma itself; it is only when you modify the conditions in some way (implicitly changing the payoffs, or having the other player’s decision depend on yours) that the best decision changes. In your example of the mental clone, a simple expected-utility maximization gives you the right answer, assuming you know that the other player will make the same move that you do.
a simple expected-utility maximization gives you the right answer, assuming you know that the other player will make the same move that you do.
A simple expected utility maximization does. A CDT decision doesn’t. Formally specifying a maximization algorithm that behaves like CDT is, from what I understand, less simple than making it follow UDT.
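For concreteness, here’s a minimal sketch of that difference (my own illustration; the payoff numbers are the standard PD table, which nothing above fixes):

```python
# Clone Prisoner's Dilemma, illustrative payoffs (utility, higher is better):
# mutual cooperation 3, mutual defection 1, unilateral defection 5,
# being defected on 0 (the standard PD ordering).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def eu_mirrored(my_move):
    """Simple expected utility, given that the clone mirrors your move."""
    return PAYOFF[(my_move, my_move)]

def eu_causal(my_move, p_coop):
    """CDT-style expected utility: the clone's move is treated as causally
    independent of yours, whatever credence p_coop you assign to it."""
    return p_coop * PAYOFF[(my_move, "C")] + (1 - p_coop) * PAYOFF[(my_move, "D")]

print(max(("C", "D"), key=eu_mirrored))  # -> C: cooperating wins, as above
# Under causal independence, defecting dominates for every credence:
assert all(eu_causal("D", p / 10) > eu_causal("C", p / 10) for p in range(11))
```

Both functions maximize expected utility; the disagreement is entirely about which distribution the expectation is taken over.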
If all we need to do is maximize expected utility, then where is the need for an “advanced” decision theory?
From Wikipedia: “Causal decision theory is a school of thought within decision theory which maintains that the expected utility of actions should be evaluated with respect to their potential causal consequences.”
It seems to me that the source of the problem is in that phrase “causal consequences”, and the confusion surrounding the whole notion of causality. The two problems mentioned in the article are hard to fit within standard notions of causality.
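To put “causal consequences” in symbols (my gloss, written with Pearl’s do-operator; the philosophical literature often formulates it with counterfactuals instead):

$$EU_{\mathrm{CDT}}(a) = \sum_{o} P\big(o \mid \mathrm{do}(a)\big)\,U(o), \qquad EU_{\mathrm{EDT}}(a) = \sum_{o} P(o \mid a)\,U(o).$$

The two come apart exactly when your act carries information about things it doesn’t cause, such as the predictor’s guess in Newcomb’s problem or the clone’s choice in the PD.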
It’s worth mentioning that you can turn Pearl’s causal nets into plain old Bayesian networks by explicitly modeling the notion of an intervention. (Pearl himself mentions this in his book.) You just have to add some additional variables and their effects; this allows you to incorporate the information contained in your causal intuitions.
This suggests to me that causality really isn’t a fundamental concept, and that causality conundrums result from failing to include all the relevant information in your model.
[The term “model” here just refers to the joint probability distribution you use to represent your state of information.]
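A minimal sketch of that construction (entirely illustrative numbers): take a hidden common cause U of X and Y, then add an explicit intervention variable I that, when active, overrides X’s normal mechanism. Ordinary conditioning in the enlarged network then reproduces the interventional distribution:

```python
# Hidden common cause U -> X, U -> Y, plus X -> Y.  Observational
# conditioning on X is confounded by U; conditioning on X *and* on the
# intervention flag I is not, because I cuts the U -> X mechanism.
P_U = {0: 0.5, 1: 0.5}

def p_x(x, u, i):
    if i:                                # intervention: X is set to 1 by fiat
        return 1.0 if x == 1 else 0.0
    natural = 0.9 if u else 0.1          # natural mechanism: X tracks U
    return natural if x == 1 else 1 - natural

def p_y(y, x, u):
    base = 0.2 + 0.5 * u + 0.2 * x       # Y depends on both U and X
    return base if y == 1 else 1 - base

def p_y1_given_x1(i):
    num = sum(P_U[u] * p_x(1, u, i) * p_y(1, 1, u) for u in (0, 1))
    den = sum(P_U[u] * p_x(1, u, i) * p_y(y, 1, u)
              for u in (0, 1) for y in (0, 1))
    return num / den

print(p_y1_given_x1(i=0))  # 0.85: observational, inflated by confounding
print(p_y1_given_x1(i=1))  # 0.65: equals P(Y=1 | do(X=1)) in the original net
```

No causal primitive appears anywhere; the intervention is just one more variable in the joint distribution.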
Where I’m getting to with all of this is that if you model your information correctly, the difference between Causal Decision Theory and Evidential Decision Theory dissolves, and Newcomb’s Paradox and the Cloned Prisoner’s Dilemma are easily resolved.
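For illustration, with the usual Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a predictor who is right with probability $p$, straightforward conditioning gives

$$EU(\text{one-box}) = p \cdot 1{,}000{,}000, \qquad EU(\text{two-box}) = p \cdot 1{,}000 + (1 - p)\cdot 1{,}001{,}000,$$

so one-boxing has the higher expectation whenever $p > 0.5005$; no special causal machinery is needed once the correlation between prediction and choice is in the model.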
I think I’m going to have to write this up as an article of my own to really explain myself...

See my comment here—though if this problem keeps coming up then a post should be written by someone I guess.
Game/decision theory is a mathematical discipline. It doesn’t get much more artificial than that. The fact that it is somewhat applicable to reality is an interesting side effect.
If your goal is to figure out what to have for breakfast, not much relevance at all. If your goal is to program an automated decision-making system to figure out what breakfast supplies to make available to the population of the West Coast of the U.S., perhaps quite a lot. If your goal is to program an automated decision-making system to figure out how to optimize all available resources for the maximum benefit of humanity, perhaps even more.
There are lots of groups represented on LW, with different perceived needs. Some are primarily interested in self-help threads, others primarily interested in academic decision-theory threads, and many others. Easiest is to ignore threads that don’t interest you.
If your goal is to program an automated decision-making system to figure out what breakfast supplies to make available to the population of the West Coast of the U.S., perhaps quite a lot.
This example has nothing like the character of the one-box/two-box problem or the PD-with-mental-clone problem described in the article. Why should it require an “advanced” decision theory? Because people’s consumption will respond to the supplies made available? But standard game theory can handle that.
There are lots of groups represented on LW, with different perceived needs. [...] Easiest is to ignore threads that don’t interest you.
It’s not that I’m not interested; it’s that I’m puzzled as to what possible use these “advanced” decision theories can ever have to anyone.
OK, ignore those examples for a second, and ignore the word “advanced.”
The OP is drawing a distinction between CDT, which he claims fails in situations where competing agents can predict one another’s behavior to varying degrees, and other decision theories, which don’t fail. If he’s wrong in that claim, then articulating why would be helpful.
If, instead, he’s right in that claim, then I don’t see what’s useless about theories that don’t fail in that situation. At least, it certainly seems to me that competing agents predicting one another’s behavior is something that happens all the time in the real world. Does it not seem that way to you?
But the basic assumption of standard game theory, which I presume he means to include in CDT, is that the agents can predict each other’s behavior—it is assumed that each will make the best move they possibly can.
I don’t think that predicting behavior is the fundamental distinction here. Game theory is all about dealing with intelligent actors who are trying to anticipate your own choices. That’s why the Nash equilibrium is generally a probabilistic strategy—to make your move unpredictable.
But the basic assumption of standard game theory, which I presume he means to include in CDT, is that the agents can predict each other’s behavior—it is assumed that each will make the best move they possibly can.
Not quite. A unique Nash equilibrium is an unexploitable strategy; you don’t need to predict what the other agents will do, because the worst expected utility for you comes when they also pick the equilibrium (see the sketch at the end of this comment). If they depart, you can often profit.
Non-unique Nash equilibria (like the coordination game) are a classical game theory problem without a general solution.
Classical game theory uses the axiom of independence to avoid having to predict other agents in detail. The point of the advanced decision theories is that we can sometimes do better than that outcome if independence is in fact violated.
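The sketch promised above (my example, matching pennies): the unique equilibrium is for both players to randomize 50/50, and at that mix Row’s expected payoff is fixed no matter what Column does, while a detectable departure by Column is exploitable.

```python
# Matching pennies: Row wins +1 if the two coins match, -1 otherwise.
def row_payoff(p_row, p_col):
    """Expected payoff to Row when Row/Column play heads w.p. p_row/p_col."""
    match = p_row * p_col + (1 - p_row) * (1 - p_col)
    return match * 1 + (1 - match) * (-1)

# At the equilibrium mix, Row gets 0 whatever Column does (unexploitable):
for q in (0.0, 0.3, 0.5, 1.0):
    print(f"Row at 50/50 vs Column heads w.p. {q}: {row_payoff(0.5, q)}")

# If Column drifts to 80% heads and Row notices, Row profits by playing heads:
print(row_payoff(1.0, 0.8))  # 0.6 > 0
```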
I’m not sure that equating “CDT” with “standard game theory” as you reference it here is preserving the OP’s point.