“And ability to handle counterfactuals is basically free if you have anything resembling a predictive model of the world”—ah, but a predictive model also requires counterfactuals.
No, prediction and counterfactuals share a common mechanism that is neutral between them.
Decision theory is about choosing possible courses of action according to their utility, which implies choosing them for, among other things, their probability. A future action is an event that has not happened yet. A past counterfactual is an event that didn’t happen. There’s a practical difference between the two, but they share a theoretical component: “What would be the output given input Y?” Note how that verbal formulation gives no information about whether a future state or a counterfactual is being considered. The black box making the calculation doesn’t know whether the input it’s receiving represents something that will happen, or something that might have happened.
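To make the point concrete, here is a minimal sketch (toy model, hypothetical names) of that neutrality: the same function answers a predictive query and a counterfactual query, and nothing in its evaluation depends on which kind of query it is.

```python
def predicted_outcome(state):
    """Toy world model: given an input state, return the expected output.
    The function has no notion of whether `state` is something that will
    happen or something that merely might have happened."""
    return {"rain": "wet ground", "no rain": "dry ground"}.get(state, "unknown")

# Prediction: what will the output be if it rains tomorrow?
print(predicted_outcome("rain"))      # -> wet ground

# Counterfactual: what would the output have been had it not rained yesterday?
print(predicted_outcome("no rain"))   # -> dry ground
```

The distinction between prediction and counterfactual lives entirely in how the caller interprets the input, not in the computation itself.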
I’m puzzled that you are puzzled. JBlack’s analysis, which I completely agree with, shows how and why agents with limited information consider counterfactuals. What further problems are there? Even the issue of highly atypical agents with perfect knowledge doesn’t create that much of a problem, because they can just pretend to have less knowledge (build a simplified model) in order to expand the range of non-contradictory possibilities.