in my view of decision theory you don’t get to predict things like “the agent will randomize”
Why not? You surely agree that sometimes people can in fact predict such things. So your objection must be that it’s unfair when they do and that it’s not a strike against a decision theory if it causes you to get money-pumped in those situations. Well… why? Seems pretty bad to me, especially since some extremely high-stakes real-world situations our AIs might face will be of this type.
I see where you are coming from. But I think the reason we are interested in CDT (or any DT) in the first place is that we want to know which one works best. However, if we allow the outcomes to be judged not just on the decision we make, but also on the process used to reach that decision, then I don’t think we can learn anything useful.
Or, to put it from a different angle: if a process P is used to reach decision X, but my “score” depends not just on X but also on P, then that can be mapped to a different problem where my decision is “P and X”, and I use some other process (P') to decide which P to use.
For example, if a student on a maths paper is told they will be marked not just on the answer they give, but also on the working they write on the paper, with points deducted for crossings-out or mistakes, we could easily imagine the student using other sheets of paper (or the inside of their head) to first work out the working they are going to show and the answer that goes with it. Here the decision problem’s “output” is the entire exam paper, not just the answer.
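To make this recasting concrete, here is a minimal Python sketch (the scoring rule and all names are invented for illustration): once the object of choice is the pair “P and X”, a problem that scores the process becomes an ordinary outcome-scored problem over a larger decision space.

```python
# Minimal sketch (hypothetical scoring rule): a problem that grades both the
# process P and the decision X, recast so the "decision" is the pair (P, X).

def score(process: str, decision: str) -> int:
    """Invented rule that inspects the process, not just the final answer."""
    bonus = 10 if process == "neat_working" else 0
    return (100 if decision == "correct" else 0) + bonus

# The recast problem: the object of choice is the whole pair (P, X).
candidates = [(p, x) for p in ("neat_working", "messy_working")
                     for x in ("correct", "incorrect")]

# P' is simply "pick the pair with the best score": an ordinary
# outcome-based optimisation over the enlarged decision space.
best = max(candidates, key=lambda pair: score(*pair))
print(best)  # ('neat_working', 'correct')
```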
I don’t think I understand this yet, or maybe I don’t see how it’s a strong enough reason to reject my claims, e.g. my claim “If standard game theory has nothing to say about what to do in situations where you don’t have access to an unpredictable randomization mechanism, so much the worse for standard game theory, I say!”
I think we might be talking past each other. I will try and clarify what I meant.
Firstly, I fully agree with you that standard game theory should give you access to randomization mechanisms. I was just saying that I think hypotheticals where you are judged on the process you use to decide, and not on your final decision, are a bad way of working out which processes are good, because the hypothetical can just declare any process to be the one it rewards by fiat.
Related to the randomization mechanisms: in the kinds of problems people worry about, with predictors guessing your actions in advance, it’s very important to distinguish between [1] (pseudo-)randomization processes that the predictor can predict, and [2] ones that it cannot.
[1] Randomisation that can be predicted by the predictor is (I think) a completely uncontroversial resource to give agents in these problems. In this case we don’t need to make predictions like “the agent will randomise”, because we can instead make the stronger prediction “the agent will randomise, and the seed of their RNG is this, so they will take one box”, which is just a longer way of saying “they will one-box”. We don’t need the predictor to show its working by mentioning the RNG intermediate step.
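As a toy illustration of [1] (a hypothetical setup, not a claim about any particular problem): once the predictor knows the agent’s PRNG algorithm and seed, “the agent will randomise” collapses into an ordinary deterministic prediction of the final choice.

```python
import random

SEED = 42  # assumed known to both the agent and the predictor

def agent_choice() -> str:
    # The agent "randomises" using an ordinary seeded PRNG.
    rng = random.Random(SEED)
    return "one-box" if rng.random() < 0.5 else "two-box"

def predictor_guess() -> str:
    # The predictor simply replays the same PRNG with the same seed.
    rng = random.Random(SEED)
    return "one-box" if rng.random() < 0.5 else "two-box"

assert agent_choice() == predictor_guess()  # the prediction is always exact
```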
[2] Randomisation that is beyond the predictor’s power is (I think) not the kind of thing that can sensibly be included in these thought experiments. We cannot simultaneously assume that the predictor is pretty good at predicting our actions and useless at predicting a random number generator we might use to choose our actions. The premises: “Alice has a perfect quantum random number generator that is completely beyond the power of Omega to predict. Alice uses this machine to make decisions. Omega can predict Alice’s decisions with 99% accuracy” are incoherent.
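A back-of-envelope simulation of the clash (the fair coin below stands in for Alice’s quantum device): whatever guessing policy the predictor adopts, it is right only about half the time against a choice it genuinely cannot see, so the stipulated 99% accuracy is unattainable.

```python
import random

# Against a choice driven by a coin the predictor cannot see, any fixed
# guessing policy is correct ~50% of the time, far short of 99%.
TRIALS = 100_000
for guess in ("one-box", "two-box"):
    hits = sum(
        guess == ("one-box" if random.random() < 0.5 else "two-box")
        for _ in range(TRIALS)
    )
    print(f"always guess {guess}: accuracy ~ {hits / TRIALS:.3f}")  # ~0.500
```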
So I don’t see how randomization helps. The first kind, [1], doesn’t change anything, and the second kind, [2], seems like it cannot be consistently combined with the premise of the question. Perfect predictors and perfect random number generators cannot exist in the same universe.
There might be interesting nearby problems where you imagine the predictor is 100% effective at determining the agent’s algorithm, but, because the agent has access to a perfect random number generator, it cannot predict their actions. Maybe this is what you meant? In this kind of situation I am still much happier with rules like “It will fill the box with gold if it knows there is a <50% chance of you picking it” [the closest we can get to “outcomes not processes” in probabilistic land] (or perhaps the alternative “the probability that it fills the box with gold is one minus the probability with which it predicts the agent will pick the box”). But rules like “It will fill the box with gold if the agent’s process uses either randomisation or causal decision theory” seem unhelpful to me.
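A rough sketch of why the threshold rule stays in “outcomes not processes” territory (the payoff value and the bare game structure are invented; only the <50% clause comes from the rule above): the rule responds to the agent’s choice probability p, however that p is produced.

```python
# Invented payoff for illustration; only the <50% threshold is from the rule.
GOLD = 100

def expected_payoff(p: float) -> float:
    # Omega fills the box iff the predicted probability p of the agent
    # taking it is below 0.5; the agent then takes it with that same p.
    box_filled = p < 0.5
    return p * (GOLD if box_filled else 0)

for p in (0.0, 0.25, 0.49, 0.50, 0.75, 1.0):
    print(f"p={p:.2f} -> expected payoff {expected_payoff(p):5.1f}")
# Payoff rises as p approaches 0.5 from below, then drops to 0 at p >= 0.5:
# the rule cares about the (probabilistic) choice, not the process behind it.
```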
Sure, and sometimes people can predict things like “the agent will use UDT” and use that to punish the agent. But this kind of prediction is “unfair” because it doesn’t lead to an interesting decision theory: you can punish any decision theory that way. So to me the boundaries of “fair” and “unfair” are also partly about mathematical taste and promisingness, not just about what will lead to a better tank and such.
Right, that kind of prediction is unfair because it doesn’t lead to an interesting decision theory… but I asked why you don’t get to predict things like “the agent will randomize.” All sorts of interesting decision theory comes out of considering situations where you do get to predict such things. (Besides, such situations are important in real life.)
I might suggest “not interesting” rather than “not fair” as the complaint. One can imagine an Omega that leaves the box empty if the player is unpredictable, or if the player doesn’t rigorously follow CDT, or one that just always leaves it empty regardless. But there’s no intuition pump that it drives, and no analysis of why a formalization would or wouldn’t get the right answer.
When I’m in challenge-the-hypothetical mode, I defend CDT by making the agent believe Omega cheats: it’s a trick box that changes its contents AFTER the agent chooses but BEFORE the contents are revealed. To any rational agent, this is much more probable than mind-reading or extreme predictability.