Wei, if you want to calculate the consequence of an action, you need to know that this computation outputting A1 has something to do with box B containing a million dollars (and being obtained by you, for that matter), or that A2 has something to do with the driver in Parfit’s Hitchhiker deciding to pick you up and take you to the city. (And yet hypothetically choosing A6 is not used to infer, inside the counterfactual, that A6 actually was better than A7.)
This is what I am saying would get computed via the causal graphs, and which may require actual counterfactual surgery a la Pearl—at least the part where you don’t believe that A6 actually was better than A7 or that (hypothetically) deciding to cross the road makes it safe—though you may not need to recompute Parfit’s Hitchhiker, since this is an updateless decision theory to begin with.
I’m afraid I don’t understand you. Can you look at my solution to Drescher’s problem and point out which part is wrong or problematic? Or give a sample problem that UDT1 can’t deal with because it doesn’t use causal graphs?
Last time I tried to read Pearl’s book, I didn’t get very far. I’ll try again if given sufficient motivation. I guess you can either explain to me some more about what problem it solves, or I can just take your word for it, if you think it’s really a necessary component for UDT, and I’ll understand that after I comprehend Pearl.
We’re taking apart your “mathematical intuition” into something that invents a causal graph (this part is still magic) and a part that updates a causal graph “given that your output is Y” (Pearl says how to do this).
If you literally have the ability to run all of reality excluding yourself as a computer program, I suppose the causal graph part might be moot, since you could just simulate elementary particles directly, instead of approximating them with a high-level causal model. But then it’s not clear how to literally simulate out the whole universe in perfect detail when the inside of your computer is casting gravitational influences outward based on transistors whose exact value you haven’t yet computed (since you can’t compute all of yourself in advance of computing yourself!).
With different physics and a perfect Cartesian embedding (a la AIXI) you could do this, perhaps. With a perfect Cartesian embedding and knowledge of the rest of the universe outside yourself, there would be no need for causal graphs of any sort within the theory, I think. But you would still have to factor out your logical uncertainty in a way which prevented you from concluding “if I choose A6, it must have had higher utility than A7” when considering A6 as an option (as Drescher observes). After all, if you suffered a brief bout of amnesia afterward, and I told you with trustworthy authority that you really had chosen A6, you would conclude that you really must have calculated higher expected utility for it relative to your probability distribution and utility function.
If I believably tell you that Lee Harvey Oswald really didn’t shoot JFK, you conclude that someone else did. But in the counterfactual on our standard causal model, if LHO hadn’t shot JFK, no one else would have. So when postulating that your output is A6 inside the decision function, you’ve got to avoid certain conclusions that you would in fact come to, if you observed in reality that your output really was A6, like A6 having higher expected utility than A7. This sort of thing is the domain of causal graphs, which is why I’m assuming that the base model is a causal graph with some logical uncertainty in it. Perhaps you could come up with a similar but non-causal formalism for pure logical uncertainty, and then this would be very interesting.
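For concreteness, here is a minimal sketch in Python of the conditioning-versus-surgery distinction being leaned on here. The toy model, variable names, and probabilities are all made-up illustrations, not anything from Pearl or from this thread: conditioning on “Oswald didn’t shoot” (plus JFK’s death) implies a backup shooter, while surgically forcing the same variable to False does not.

```python
# Toy causal model (illustrative only):
#   conspiracy, oswald_shoots -> backup_shoots
#   oswald_shoots, backup_shoots -> jfk_dies

P_CONSPIRACY = 0.01  # prior probability that a backup shooter exists at all

def enumerate_worlds():
    """Enumerate worlds with their probabilities under the toy causal model."""
    for conspiracy in (True, False):
        p_c = P_CONSPIRACY if conspiracy else 1 - P_CONSPIRACY
        for oswald in (True, False):
            p_o = 0.98 if oswald else 0.02        # Oswald almost certainly shot
            backup = conspiracy and not oswald    # backup fires only if needed
            yield p_c * p_o, {"oswald": oswald, "backup": backup,
                              "jfk_dies": oswald or backup}

def prob(event, worlds):
    total = sum(p for p, _ in worlds)
    return sum(p for p, w in worlds if event(w)) / total

worlds = list(enumerate_worlds())

# Conditioning: told that JFK died and that Oswald did NOT shoot, you conclude
# that someone else did.
observed = [(p, w) for p, w in worlds if w["jfk_dies"] and not w["oswald"]]
print(prob(lambda w: w["backup"], observed))                   # -> 1.0

# Surgery: force oswald_shoots = False by intervention (cut its incoming
# edges), recompute downstream; no backup shooter appears out of nowhere.
def do_oswald_false():
    for conspiracy in (True, False):
        p_c = P_CONSPIRACY if conspiracy else 1 - P_CONSPIRACY
        backup = conspiracy                       # oswald is forced to False
        yield p_c, {"oswald": False, "backup": backup, "jfk_dies": backup}

print(prob(lambda w: w["jfk_dies"], list(do_oswald_false())))  # -> 0.01
```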
Eliezer, one of your more recent comments finally prodded me into reading http://bayes.cs.ucla.edu/IJCAI99/ijcai-99.pdf (don’t know why I waited so long), and I can now understand this comment much better. Except this part:
But you would still have to factor out your logical uncertainty in a way which prevented you from concluding “if I choose A6, it must have had higher utility than A7” when considering A6 as an option (as Drescher observes).
Under UDT1, when I’m trying to predict the consequences of choosing A6, I do want to assume that it has higher expected utility than A7. Suppose my prediction subroutine sees that there will be another agent, very similar to me, about to make the same decision; it should predict that this agent will also choose A6, right?
Now when the prediction subroutine returns, that assumption pops off the stack and goes away. I then call my utility evaluation routine to compute a utility for those predictions. There is no place for me to conclude “if I choose A6, it must have had higher utility than A7” in a form that would cause any problems.
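For what it’s worth, here is a minimal sketch of the control flow being described, as I read it. The toy world model, the payoffs, and the function names are my own illustrative assumptions, not Wei Dai’s formalism: the assumption “my output is a” lives only inside the prediction call, and the utilities are only compared after all the predictions have been made.

```python
def predict_world(my_output):
    """Prediction step: while this runs, "my program outputs my_output" is
    assumed, so a similar agent running the same computation is predicted to
    output the same thing.  The assumption does not outlive this call."""
    other_agents_output = my_output          # same computation, same output
    return {"me": my_output, "other": other_agents_output}

def utility(world):
    # Toy payoffs: mutual A7 happens to be best; nothing here appeals to
    # "whatever I chose must have been best".
    payoffs = {("A6", "A6"): 5, ("A7", "A7"): 8,
               ("A6", "A7"): 1, ("A7", "A6"): 1}
    return payoffs[(world["me"], world["other"])]

def udt1_decide(actions):
    # Predict-then-evaluate for each candidate output; compare only afterwards.
    scores = {a: utility(predict_world(a)) for a in actions}
    return max(scores, key=scores.get)

print(udt1_decide(["A6", "A7"]))             # -> A7
```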
Why bother predicting the counterfactual consequences of choosing A6, since you already “know” its EU is higher than that of A7 and all the other options?
On the other hand, if you actually do see a decision process similar to your own choose A6, then you know that A6 really does have a higher EU than A7.
Why bother predicting the counterfactual consequences of choosing A6, since you already “know” its EU is higher than that of A7 and all the other options?
Are you sure you’re not anthropomorphizing the decision procedure? If I actually run through the steps that it specifies in my head, I don’t see any place where it would say “why bother” or fail to do the prediction.
On the other hand, if you actually do see a decision process similar to your own choose A6, then you know that A6 really does have a higher EU than A7.
No, in UDT1 you don’t update on outside computations like that. You just recompute the EU.
In any case, you shouldn’t know wrong things at any point. The trick is to be able to consider what’s going on without assuming (knowing) that you result from an actual choice.
No, in UDT1 you don’t update on outside computations like that. You just recompute the EU.
This doesn’t seem right. You do update, in the sense that you’d prefer a strategy where observing a utility-maximizer choose X leads you to conclude that X is the highest-utility choice, i.e. all subsequent actions are chosen as if that were so.
Looking over this… maybe this is stupid, but… isn’t this sort of a use/mention issue?
When simulating “if I choose A6”, also simulate “THEN I would have believed A6 has higher EU”, without having to escalate that to “actual I (not simulated I) currently believe A6 has higher EU”.
Just don’t have a TDT agent consider the beliefs of the counterfactual simulated versions of itself to be a reliable authority on actual, non-counterfactual reality.
Am I missing the point? Am I skimming over the hard part, or...?
That’s one possible approach. But then you have to define what exactly constitutes a “use” and what constitutes a “mention” with respect to inferring facts about the universe. Compare the crispness of Pearl’s counterfactuals to classical causal decision theory’s counterfactual distributions falling from heaven, and you’ll see why you want more formal rules saying which inferences you can carry out.
Seems to me that it ought to be treatable as “perfectly ordinary”...
That is, if you run a simulation, there’s no reason for you to believe the same things that the modeled beings believe, right? If one of the modeled beings happens to be a version of you that’s acting and believing in terms of a counterfactual that is the premise of the simulation, then… why would that automatically lead to you believing the same thing in the first place? If you simulate a piece of paper that has written upon it “1+1=3”, does that mean that you actually believe “1+1=3”? So if instead you simulate a version of yourself that gets confused and believes that “1+1=3”… well, that’s just a simulation. If there’s a risk of that escalating into your actual model of reality, that would suggest something is very wrong somewhere in how you set up the simulation in the first place, right?
I.e., simulated you is allowed to make all the usual inferences from, well, other stuff in the simulated world. It’s just that actual you doesn’t get to automatically equate simulated you’s beliefs with actual you’s beliefs.
So allow the simulated version to make all the usual inferences. I don’t see why any restriction is needed other than the level separation, which doesn’t need to treat this issue as a special case.
I.e., simulated you in the counterfactual in which A6 was chosen believes that, well, A6 is what the algorithm in question would choose as the best choice. So? You calmly observe/model the actions simulated you takes if it believes that, and so on, without having to actually believe that yourself. Then, once all the counterfactual modelings are done, you apply your utility function to each of them to determine their actual expected utility, find that A7 produces the highest EU, and actually do A7.
It simply happens to be that most of the versions of you from the counterfactual models that arose in the process of doing the TDT computation had false beliefs about what the actual output of the computation actually is in actual reality.
Am I missing the point still, or...?
(Wait… I’ve been understanding this to be something you consider an unsolved issue in TDT, and I’m saying “no, it seems to me simple to make TDT do the right thing here; the Pearl-style counterfactual stuff oughtn’t cause any problems, and no special cases or forbidden inferences need to be hard-coded.” But now, looking at your comment, maybe you meant “this issue justifies TDT because TDT actually does the right thing here”, in which case there was no need for me to say any of this at all. :))
The belief that A6 is highest-utility must come from somewhere. A strategy that includes A6 is not guaranteed to be real (game semantics: winning; ludics: without a daemon); that is, it’s not guaranteed to hold without assuming facts for no reason. The action A6 is exactly such an assumption, one that is given no reason to actually be found in the strategy, and the activity of the decision-making algorithm consists exactly in proving (implementing) one of the actions as the one actually carried out. Of course, the fact that A6 is highest-utility may also be considered counterfactually, but then you are just doing something not directly related to proving this particular choice.
I meant that when dealing with the logical uncertainty of not yet knowing the outcome of the calculation your decision process consists of, and counterfactually modelling each of the outcomes it “could” output, then when modeling your own resulting actions/beliefs, you simply don’t escalate that from a model of you to, well, actually you. The simulated you that conditions on you (counterfactually) having decided A6 would presumably believe A6 has higher utility. So? You, who are also running the simulation for if you had chosen A7, etc., would compare and conclude that A7 has the highest utility, even though simulated you believes (incorrectly) in A6. Just keep the levels separate, don’t make use/mention-style errors, and (near as I can tell) there wouldn’t be a problem.
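A minimal sketch of the level separation being proposed, under my own made-up names and toy utilities: the belief “A6 has highest EU” exists only inside the simulated copy, and nothing ever merges it into the actual agent’s beliefs.

```python
class SimulatedSelf:
    def __init__(self, premise_action):
        # Inside the counterfactual, the copy takes its own output to be best.
        self.beliefs = {premise_action + " has highest EU": True}
        self.action = premise_action

class ActualAgent:
    def __init__(self):
        self.beliefs = {}                     # actual beliefs stay untouched

    def evaluate(self, action, utility_of):
        sim = SimulatedSelf(action)           # a "mention" of the belief, not a "use"
        return utility_of(sim.action)

    def decide(self, actions, utility_of):
        scores = {a: self.evaluate(a, utility_of) for a in actions}
        best = max(scores, key=scores.get)
        # Only now does the actual agent adopt any belief, and only about the
        # genuinely best option.
        self.beliefs[best + " has highest EU"] = True
        return best

agent = ActualAgent()
print(agent.decide(["A6", "A7"], {"A6": 5, "A7": 8}.get))   # -> A7
print(agent.beliefs)    # only the A7 belief; nothing leaked from the A6 branch
```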
Remember the counterfactual zombie principle: you are only an implication; your decision or your knowledge only says what it would be if you exist, but you can’t assume that you do exist.
When you counterfactual-consider A6, you consider how the world-with-A6 will be, but don’t assume that it exists, and so can’t infer that it’s of highest utility. You are right that your copy in world-with-A6 would also choose A6, but that also doesn’t have to be an action of maximum utility, since it’s not guaranteed the situation will exist. For the action that you do choose, you may know that you’ve chosen it, but for the action you counterfactually-consider, you don’t assume that you do choose it. (In causal networks, this seems to correspond to cutting off the action-node from yourself before setting it to a value.)
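To spell out that parenthetical with a toy numeric sketch of my own (“verdict”, the 50/50 prior, and the function names are made-up illustrations): observing the action node licenses the backward inference to what the computation must have concluded, while setting the node after cutting its incoming edge does not.

```python
PRIOR_VERDICT = {"A6": 0.5, "A7": 0.5}   # before the computation finishes

def verdict_given_observed_action(action):
    # Observation: the action node got its value from the computation's
    # verdict, so seeing the action pins the verdict down (the amnesia case).
    return {v: (1.0 if v == action else 0.0) for v in PRIOR_VERDICT}

def verdict_given_do_action(action):
    # Intervention: the verdict -> action edge is cut before the action node
    # is set, so no backward inference about the verdict is licensed.
    return dict(PRIOR_VERDICT)

print(verdict_given_observed_action("A6"))   # {'A6': 1.0, 'A7': 0.0}
print(verdict_given_do_action("A6"))         # {'A6': 0.5, 'A7': 0.5}
```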
But then it’s not clear how to literally simulate out the whole universe in perfect detail when the inside of your computer is casting gravitational influences outward based on transistors whose exact value you haven’t yet computed (since you can’t compute all of yourself in advance of computing yourself!).
Somewhat tangentially, this is a way to grok why the information processing that markets do is computationally intractable to simulate (and why their outputs can’t be predicted via experts).
Thanks, that’s helpful.