I was replying to this bit in the post:

The problem of counterfactuals is the problem of what we do and should mean when we discuss what “would” have happened, “if” something impossible had happened.
...and this bit:
Recall that we seem to need counterfactuals in order to build agents that do useful decision theory—we need to build agents that can think about the consequences of each of their “possible actions”, and can choose the action with best expected-consequences. So we need to know how to compute those counterfactuals.
It is true that agents do sometimes calculate what would have happened if something in the past had happened a different way—e.g. to help analyse the worth of their decision retrospectively. That is probably not too common, though.
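For concreteness, here is a minimal sketch of the decision procedure the quoted bit describes: estimate the expected consequences of each “possible action” under some model, then choose the action with the best expected value. Everything here (the `toy_model`, the utility function, the sampling approach) is a hypothetical illustration of mine, not the post's proposal; the model is a stand-in for exactly the counterfactual machinery whose definition is at issue.

```python
import random

def expected_utility(action, model, utility, n_samples=1000):
    """Estimate the expected utility of an action by sampling outcomes
    from a model of what would happen if the action were taken."""
    total = 0.0
    for _ in range(n_samples):
        outcome = model(action)  # "what would happen if I did this?"
        total += utility(outcome)
    return total / n_samples

def choose_action(actions, model, utility):
    """Pick the action with the best expected consequences."""
    return max(actions, key=lambda a: expected_utility(a, model, utility))

# Toy example: a noisy bet. The hard part the post points at is hidden
# inside toy_model, which answers counterfactual queries by fiat here.
def toy_model(action):
    if action == "bet":
        return 10 if random.random() < 0.6 else -5
    return 0  # "pass" yields nothing

print(choose_action(["bet", "pass"], toy_model, utility=lambda x: x))
```

Note that the sketch only evaluates *future* actions; the retrospective case mentioned above (asking what would have happened had a past decision gone differently) needs the same counterfactual model, just pointed backwards.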