Puzzle piece 4: On their face, all calculations of counterfactual payoffs (“woulds”) involve asking questions about impossible worlds. It is not clear how to interpret such questions.
Determinism notwithstanding, it is tempting to interpret CSAs’ “woulds”—our U(a_i)s above—as calculating what “really would” happen, if they “were” somehow able to take each given action.
But if agent X will (deterministically) choose action a_1, then when he asks what would happen “if” he takes alternative action a_2, he’s asking what would happen if something impossible happens.
If X is to calculate the payoff “if he takes action a_2” as part of a causal world-model, he’ll need to choose some particular meaning of “if he takes action a_2” – some meaning that allows him to combine a model of himself taking action a_2 with the rest of his current picture of the world, without allowing predictions like “if I take action a_2, then the laws of physics will have been broken”.
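For concreteness, one standard way to give “if he takes action a_2” a definite meaning is causal surgery in the Pearl/CDT style: hold the rest of the current world-picture fixed, overwrite the action node, and propagate consequences forward from there. The following is only a minimal sketch; the actions, the payoff values, and the world_model function are all invented for illustration.

    # Hypothetical sketch: evaluating counterfactual payoffs U(a_i) by
    # "causal surgery": force the agent's action to a_i in a copy of the
    # world-model, then compute consequences forward from there.

    def world_model(action, state):
        # Toy stand-in for the agent's causal model of the world; the
        # payoff values here are invented purely for illustration.
        payoff_by_action = {"a_1": 10, "a_2": 3}
        return payoff_by_action[action] + state["background_utility"]

    def counterfactual_payoff(action, state):
        # "If he takes action a_2" is read as: splice `action` into a
        # *copy* of the current world-picture and propagate, rather than
        # asking what is physically possible given the action he will in
        # fact take.  Nothing here licenses predictions like "the laws of
        # physics will have been broken".
        hypothetical_state = dict(state)
        return world_model(action, hypothetical_state)

    state = {"background_utility": 0}
    payoffs = {a: counterfactual_payoff(a, state) for a in ("a_1", "a_2")}
    chosen_action = max(payoffs, key=payoffs.get)  # argmax over the U(a_i)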
Perhaps it’s already been said, but isn’t there a temporal problem in this reasoning?
A CSA, while in the process of making its decision, does not yet know what its decision will be. Therefore it can evaluate any number of “coulds”, figuring out and caching the “woulds” before choosing its action, without causing any logical quandaries.
While evaluating a “could”, it assumes, for the purposes of that evaluation, that it has already finished deliberating and that this “could” turned out to be the chosen action (a toy sketch of this loop follows below).
Or did I completely miss the point?
EDIT: IOW, there can’t be a counterfactual until a “factual” exists, and a “factual” won’t exist until the decision process has completed...
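A toy sketch of the loop the comment describes, reusing the invented actions and payoffs from the earlier sketch (and glossing over the logical-uncertainty issue raised in the reply below):

    # Toy sketch of the procedure the comment describes: before any
    # decision exists, evaluate each "could", cache the resulting
    # "would", and only then pick an action.  Payoff values are invented.

    def payoff_assuming_chosen(action):
        # Evaluated under the working assumption "deliberation is over
        # and `action` is the one that got chosen".
        return {"a_1": 10, "a_2": 3}[action]

    coulds = ["a_1", "a_2"]        # candidate actions
    woulds = {}                    # cache of counterfactual payoffs
    for action in coulds:
        # No "factual" choice exists yet, so assuming `action` gets
        # chosen does not (yet) contradict anything the agent knows.
        woulds[action] = payoff_assuming_chosen(action)

    decision = max(woulds, key=woulds.get)  # only now does a "factual" exist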
Therefore it can evaluate any number of “coulds”, figuring out and caching the “woulds” before choosing its action, without causing any logical quandaries.
In practice, it can; but formalizing this process requires formalizing logical uncertainty / impossible possible worlds, which is an unsolved problem.