Why bother predicting the counterfactual consequences of choosing A6 since you already “know” the EU is higher than A7 and all the other options?
On the other hand, if you actually do see a decision process similar to your own choose A6, then you know that A6 really does have EU higher than A7.
> Why bother predicting the counterfactual consequences of choosing A6 since you already “know” the EU is higher than A7 and all the other options?
Are you sure you’re not anthropomorphizing the decision procedure? If I actually run through the steps that it specifies in my head, I don’t see any place where it would say “why bother” or fail to do the prediction.
> On the other hand, if you actually do see a decision process similar to your own choose A6, then you know that A6 really does have EU higher than A7.
No, in UDT1 you don’t update on outside computations like that. You just recompute the EU.
In any case, you shouldn’t know wrong things at any point. The trick is to be able to consider what’s going on without assuming (knowing) that you result from an actual choice.
> No, in UDT1 you don’t update on outside computations like that. You just recompute the EU.
This doesn’t seem right. You update just fine, in the sense that you’d prefer a strategy where observing a utility-maximizer choose X leads you to conclude that X is the highest-utility choice, i.e., all the subsequent actions are chosen as if it’s so.
Looking over this… maybe this is stupid, but… isn’t this sort of a use/mention issue?
When simulating “if I choose A6”, simulate “THEN I would have believed A6 has higher EU”, without having to escalate that to “actual I (not simulated I) actually currently now believes A6 has higher EU”.
Just don’t have a TDT agent consider the beliefs of the counterfactual simulated versions of itself to be a reliable authority on actual noncounterfactual reality.
Am I missing the point? Am I skimming over the hard part, or...?
That’s one possible approach. But then you have to define what exactly constitutes a “use” and what constitutes a “mention” with respect to inferring facts about the universe. Compare the crispness of Pearl’s counterfactuals to classical causal decision theory’s counterfactual distributions falling from heaven, and you’ll see why you want more formal rules saying which inferences you can carry out.
Seems to me that it ought to be treatable as “perfectly ordinary”...
That is, if you run a simulation, there’s no reason for you to believe the same things that the modeled beings believe, right? If one of the modeled beings happens to be a version of you that’s acting and believing in terms of a counterfactual that is the premise of the simulation, then… why would that automatically lead to you believing the same thing in the first place? If you simulate a piece of paper that has written upon it “1+1=3”, does that mean that you actually believe “1+1=3”? So if instead you simulate a version of yourself that gets confused and believes that “1+1=3”… well, that’s just a simulation. If there’s a risk of that escalating into your actual model of reality, that would suggest something is very wrong somewhere in how you set up the simulation in the first place, right?
ie, simulated you is allowed to make all the usual inferences from, well, other stuff in the simulated world. It’s just that actual you doesn’t get to automatically equate simulated you’s beliefs with actual you’s beliefs.
So allow the simulated version to make all the usual inferences. I don’t see why any restriction is needed other than the level separation, which doesn’t need to treat this issue as a special case.
ie, simulated you in the counterfactual in which A6 was chosen believes that, well, A6 is what the algorithm in question would choose as the best choice. So? You calmly observe/model the actions simulated you takes if it believes that, and so on, without having to actually believe that yourself. Then, once all the counterfactual modelings are done, you apply your utility function to each of those to determine their actual expected utility; finding that A7 produces the highest EU, you actually do A7.
It simply happens to be that most of the versions of you from the counterfactual models that arose in the process of doing the TDT computation had false beliefs about what the output of the computation actually is in reality.
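The level separation being proposed can be sketched as a toy program. Everything here (the action names, the payoffs, the `best_action` belief key) is invented for illustration; this is not the actual TDT algorithm, just the use/mention discipline under discussion: each simulated self gets its own copied belief state, and only the utilities are passed back up.

```python
# Toy sketch (all names and payoffs invented): evaluate each candidate
# action inside a *copy* of the belief state, so the simulated self's
# (possibly false) belief never leaks back into the actual agent.

PAYOFFS = {"A6": 5, "A7": 9}  # pretend A7 really is the better action

def simulate(action, beliefs):
    """Model the counterfactual "the algorithm outputs `action`"."""
    sim_beliefs = dict(beliefs)          # separate level, never aliased
    sim_beliefs["best_action"] = action  # simulated you believes this
    outcome = PAYOFFS[action]            # what would then happen
    return outcome, sim_beliefs

def decide(actions, beliefs):
    """Actual you: mention each simulated self's beliefs, never use them."""
    utilities = {}
    for a in actions:
        utility, sim_beliefs = simulate(a, beliefs)
        utilities[a] = utility           # only the EU is taken upstairs
    assert "best_action" not in beliefs  # actual beliefs were untouched
    return max(utilities, key=utilities.get)

print(decide(["A6", "A7"], {}))  # the agent actually does "A7"
```

The simulated copy for A6 “believes” A6 is the output; the outer agent merely inspects that model, compares utilities, and does A7, which is all the paragraph above asks for.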
Am I missing the point still, or...?
(wait… I’m understanding this issue to be something that you consider an unsolved issue in TDT and I’m saying “no, seems to me to be simple to make TDT do the right thing here. The Pearl style counterfactual stuff oughtn’t cause any problem here, no special cases, no forbidden inferences need to be hard coded here”, but now, looking at your comment, maybe you meant “This issue justifies TDT because TDT actually does the right thing here”, in which case there was no need for me to say any of this at all. :))
The belief that A6 is highest-utility must come from somewhere. A strategy that includes A6 is not guaranteed to be real (game semantics: winning; ludics: without a daemon), that is, it’s not guaranteed to hold without assuming facts for no reason. The action A6 is exactly such an assumption, given no reason to actually be found in the strategy, and the activity of the decision-making algorithm consists exactly in proving (implementing) one of the actions to be actually carried out. Of course, the fact that A6 is highest-utility may also be considered counterfactually, but then you are just doing something not directly related to proving this particular choice.
Sorry, I’m not sure I follow what you’re saying.

I meant: when dealing with the logical uncertainty of not yet knowing the outcome of the calculation that your decision process consists of, and counterfactually modelling each of the outcomes it “could” output, then when modeling your own actions/beliefs as a result of each, simply don’t escalate that from a model of you to, well, actually you. The simulated you that conditions on you (counterfactually) having decided A6 would presumably believe A6 has higher utility. So? You, who are also running the simulation for if you had chosen A7, etc., would compare and conclude that A7 has the highest utility, even though simulated you believes (incorrectly) in A6. Just keep the levels separate, don’t make use/mention style errors, and (near as I can tell) there wouldn’t be a problem.

Or am I utterly missing the point here?