Sorry, I’m not sure I follow what you’re saying.
I meant that when you're dealing with the logical uncertainty of not yet knowing the outcome of the calculation your decision process consists of, and you're counterfactually modelling each of the outcomes it "could" output, then when you model the results of your own actions/beliefs under each of those counterfactuals, simply don't promote that from a model of you to, well, actually you. The simulated you that conditions on you (counterfactually) having decided A6 would presumably believe A6 has the higher utility. So? You, who are also running the simulation for the case where you chose A7, etc., would compare the branches and conclude that A7 has the highest utility, even though the simulated you (incorrectly) believes it's A6. Just keep the levels separate, don't make use/mention style errors, and (near as I can tell) there wouldn't be a problem.
Or am I utterly missing the point here?
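To make the level-keeping concrete, here's a minimal toy sketch (assuming a finite list of candidate actions and a utility model you can query per counterfactual branch; the names and payoff numbers are made up for illustration, not anything from the discussion). The copy simulated in the A6 branch "concludes" A6 is best, but that conclusion never leaves its branch; the outer level just compares the returned utilities and picks A7.

```python
# Toy sketch of keeping the levels separate: "actual you" evaluates a model of
# itself conditioned on each candidate action, but never adopts the simulated
# copy's conclusion as its own belief. All names/payoffs are illustrative.

def utility_in_branch(action, payoffs):
    """Counterfactual branch: a model of you conditioned on having decided `action`.

    Inside this branch the simulated copy, reasoning "I'm a maximizer and I
    picked `action`", would conclude `action` was the best choice. That
    conclusion stays inside the branch; only the utility estimate is passed up.
    """
    return payoffs[action]

def decide(candidate_actions, payoffs):
    # Outer level ("actual you"): evaluate every counterfactual branch...
    branch_utilities = {a: utility_in_branch(a, payoffs) for a in candidate_actions}
    # ...then compare across branches and take the argmax, ignoring what each
    # simulated copy "believed" within its own branch.
    return max(branch_utilities, key=branch_utilities.get)

# Stand-in payoffs: the A6-branch copy would insist A6 is best, but the outer
# comparison correctly outputs A7.
payoffs = {"A6": 6.0, "A7": 7.0, "A8": 5.5}
print(decide(["A6", "A7", "A8"], payoffs))  # -> A7
```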