When you think about the problem this way, there are no counterfactuals, only state evolution. It can be applied to the past, to the present or to the future.
This doesn’t give very useful answers when the state evolution is nearly deterministic, as it is for an agent made of computer code.
For example, consider an agent trying to decide whether to turn left or turn right. Suppose, for the sake of argument, that it actually turns left if you run physics forward. Also suppose that the agent’s logical uncertainty has figured this out, so that the best-estimate macrostate probabilities are concentrated mostly on that outcome. Now the agent considers whether to turn left or right.
Since the computation (as pure math) is deterministic, the counterfactuals that result from supposing the state evolution went right instead of left mostly consist of worlds in which the hardware glitched. This doesn’t seem like what the agent should be thinking about when it considers the alternative of going right instead of left. For example, the grocery store it is trying to get to could be on the right-hand path. The potential bad results of a hardware failure might then outweigh the desire to turn toward the grocery store, so that the agent prefers to turn left anyway.
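To make that concrete, here is a minimal toy sketch (the specific probabilities and utilities are my own assumptions, not anything from your post): a joint distribution over what the abstract algorithm outputs and whether the hardware works, where the physical action matches the algorithm’s output unless the hardware fails. Conditioning on “the agent turns right” then puts most of the posterior weight on hardware-failure worlds, which drags down the conditional expected utility of turning right even though the grocery store lies that way.

```python
# Toy illustration (my own construction): if logical uncertainty is almost
# certain the algorithm outputs "left", then conditioning on "the agent
# physically turns right" mostly selects worlds where the hardware glitched.

P_ALG_LEFT = 0.999   # assumed confidence that the abstract algorithm says "left"
P_HW_OK    = 0.99    # assumed confidence that the hardware works correctly

U_GROCERY  = 1.0     # reward for reaching the grocery store (on the right-hand path)
U_HW_FAIL  = -10.0   # assumed cost of a hardware failure

def worlds():
    """Enumerate (algorithm output, hardware ok, physical action, probability).
    If the hardware fails, assume the physical action flips."""
    for alg, p_alg in (("left", P_ALG_LEFT), ("right", 1 - P_ALG_LEFT)):
        for hw_ok, p_hw in ((True, P_HW_OK), (False, 1 - P_HW_OK)):
            action = alg if hw_ok else ("right" if alg == "left" else "left")
            yield alg, hw_ok, action, p_alg * p_hw

def expected_utility_given_action(act):
    """Condition the joint distribution on the physical action, then average utility."""
    total, eu = 0.0, 0.0
    for alg, hw_ok, action, p in worlds():
        if action != act:
            continue
        u = (U_GROCERY if action == "right" else 0.0) + (0.0 if hw_ok else U_HW_FAIL)
        total += p
        eu += p * u
    return eu / total

print("EU(left)  =", expected_utility_given_action("left"))   # ~0: hardware fine, no grocery
print("EU(right) =", expected_utility_given_action("right"))  # dominated by hardware-failure worlds
```

With these made-up numbers the conditional expected utility of left is roughly 0 and of right roughly −8, so an agent that evaluates “what if I turn right?” by conditioning in this way turns left despite the grocery store being to the right.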
For this story to make sense, the (logical) confidence that the abstract algorithm decides to turn left in this case has to be higher than the confidence that the hardware will not fail, so that turning right seems likely to imply hardware failure. This can happen due to Löb’s theorem: the whole above argument, taken hypothetically, suggests that the agent would turn left on a particular occasion if it happened to prove ahead of time that its abstract algorithm would turn left (since it would then be certain that turning right implied a hardware failure). But this means a proof of left-turning results in left-turning; so, by Löb’s theorem, left-turning is indeed provable.
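To spell out the Löbian step (my own formalization of the argument, writing $\Box$ for provability in the agent’s proof system and $L$ for the sentence “the abstract algorithm outputs left”): the hypothetical argument above establishes

$$ \vdash \; \Box L \rightarrow L, $$

and Löb’s theorem says that whenever $\vdash \Box L \rightarrow L$, also $\vdash L$. So the proof system does prove that the algorithm turns left, and the left turn is self-fulfillingly provable.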
The Newcomb’s-problem example you give also seems problematic. Again, if the agent’s algorithm is deterministic, it does basically one thing whenever the initial conditions put it in Newcomb’s problem. So essentially all of the uncertainty about the agent’s action is logical uncertainty. I’m not sure exactly what your intended notion of counterfactual is, but I don’t see how reasoning about microstates helps the agent here.