The belief that A6 is highest-utility must come from somewhere. A strategy that includes A6 is not guaranteed to be real (game semantics: winning; ludics: without a daemon), that is, it’s not guaranteed to hold without assuming facts for no reason. The action A6 is exactly such an assumption, given no reason to actually be found in the strategy, and the activity of the decision-making algorithm consists exactly in proving (implementing) that one of the actions is actually carried out. Of course, the fact that A6 is highest-utility may also be considered counterfactually, but then you are doing something not directly related to proving this particular choice.
Sorry, I’m not sure I follow what you’re saying.
I meant that when dealing with the logical uncertainty of not yet knowing the outcome of the calculation your decision process consists of, and counterfactually modelling each of the outcomes it “could” output, then when modelling the results of your own actions/beliefs under each of those, you simply don’t promote that from a model of you to, well, actually you. The simulated you that conditions on you (counterfactually) having decided on A6 would presumably believe A6 has the higher utility. So? You, who are also running the simulation for the case where you had chosen A7, and so on, would compare and conclude that A7 has the highest utility, even though the simulated you believes (incorrectly) in A6. Just keep the levels separate, don’t make use/mention-style errors, and (near as I can tell) there wouldn’t be a problem.
Or am I utterly missing the point here?
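To make the level-separation idea above concrete, here is a toy sketch in Python. Everything in it (the action list, world_utility, simulate_self) is made up for illustration under the assumptions of this thread; it is not anyone’s actual decision procedure.

```python
# Toy sketch of "keep the levels separate": the outer decision process
# evaluates each counterfactual action and compares utilities at its own
# level; it never adopts the simulated agent's belief ("my chosen action
# was the best one") as its own belief.  All names are hypothetical.

ACTIONS = ["A5", "A6", "A7"]

def world_utility(action: str) -> float:
    """Hypothetical model of how the world rewards each action."""
    return {"A5": 1.0, "A6": 2.0, "A7": 3.0}[action]

def simulate_self(conditioned_action: str) -> dict:
    """Model of you, conditioned on (counterfactually) having decided on
    `conditioned_action`.  Inside the model, that action is treated as the
    decision, so the simulated you 'believes' it was the right choice."""
    return {
        "decided": conditioned_action,
        "believes_best": conditioned_action,  # belief *inside* the model only
        "resulting_utility": world_utility(conditioned_action),
    }

def decide() -> str:
    """The actual you: run one simulation per candidate action, compare the
    modelled outcomes at this level, and ignore the simulations' beliefs."""
    simulations = {a: simulate_self(a) for a in ACTIONS}
    return max(simulations, key=lambda a: simulations[a]["resulting_utility"])

if __name__ == "__main__":
    print(decide())  # -> "A7", even though simulate_self("A6") 'believes' A6
```

The only point of the sketch is that decide() reads off resulting_utility from each simulation and compares those, and never copies believes_best from a simulated you into the beliefs of the actual you.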