Counterfactuals don’t need to be about impossible things—and agents do calculate what would have happened, if something different had happened. And it is very hard to know whether it would have been possible for something different to happen.
The problem of counterfactuals is not actually a problem. Goodman’s book is riddled with nonsensical claims.
What can Pearl’s formalism accomplish that earlier logics could not? As far as I can tell, “Bayes nets” just means that you’re going to make as many conditional-independence assumptions as you can, use an acyclic graph, and ignore time (or use a synchronous clock). But nothing changes about the logic.
I am not sure. I haven’t got much from Pearl so far. I did once try to go through The Art and Science of Cause and Effect—but it was pretty yawn-inducing.
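To make the question above concrete, here is a minimal sketch (plain Python, with toy numbers invented purely for illustration) of the one calculation the formalism is usually sold on: given an acyclic graph and its conditional-independence assumptions, the joint distribution factorises, and an intervention do(X=x) is computed by cutting the arrows into X and fixing its value, which, when there is a confounder, gives a different number from simply conditioning on X=x.

```python
# Toy network with a confounder:  Z -> X,  Z -> Y,  X -> Y.
# All of the numbers below are invented purely for illustration.

P_Z = {0: 0.7, 1: 0.3}                       # P(Z = z)
P_X_given_Z = {0: {0: 0.9, 1: 0.1},          # P(X = x | Z = z), keyed as [z][x]
               1: {0: 0.2, 1: 0.8}}
P_Y1_given_XZ = {(0, 0): 0.1, (1, 0): 0.4,   # P(Y = 1 | X = x, Z = z), keyed as (x, z)
                 (0, 1): 0.5, (1, 1): 0.9}

def p_joint(x, y, z):
    """The factorised joint the graph licences: P(z) * P(x|z) * P(y|x,z)."""
    p_y = P_Y1_given_XZ[(x, z)] if y == 1 else 1.0 - P_Y1_given_XZ[(x, z)]
    return P_Z[z] * P_X_given_Z[z][x] * p_y

def p_y1_given_x(x):
    """Ordinary conditioning: P(Y = 1 | X = x), read off the joint."""
    num = sum(p_joint(x, 1, z) for z in (0, 1))
    den = sum(p_joint(x, y, z) for y in (0, 1) for z in (0, 1))
    return num / den

def p_y1_do_x(x):
    """Pearl-style intervention: P(Y = 1 | do(X = x)).
    The arrow Z -> X is cut, so Z keeps its prior weight P(z)."""
    return sum(P_Z[z] * P_Y1_given_XZ[(x, z)] for z in (0, 1))

print(p_y1_given_x(1))  # observational:  about 0.79
print(p_y1_do_x(1))     # interventional: 0.55
```

Whether that counts as something earlier logics could not do is exactly what is in dispute here; the sketch only shows what the do-calculation is.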
I was replying to this bit in the post:

The problem of counterfactuals is the problem of what we do and should mean when we discuss what “would” have happened, “if” something impossible had happened.
...and this bit:

Recall that we seem to need counterfactuals in order to build agents that do useful decision theory—we need to build agents that can think about the consequences of each of their “possible actions”, and can choose the action with best expected-consequences. So we need to know how to compute those counterfactuals.
It is true that agents do sometimes calculate what would have happened if something in the past had happened a different way—e.g. to help analyse the worth of their decision retrospectively. That is probably not too common, though.
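For what it is worth, here is a minimal sketch (toy payoffs and an invented outcome model, not anything from the post) of the two uses of counterfactuals being distinguished above: prospectively, the agent evaluates the modelled consequences of each candidate action and picks the one with the best expected payoff; retrospectively, it compares what actually happened with the modelled payoff of the action it did not take.

```python
import random

# Invented outcome model: probability that each candidate action succeeds,
# plus a payoff for success/failure. Purely illustrative numbers.
OUTCOME_MODEL = {"act_a": 0.6, "act_b": 0.3}
PAYOFF = {True: 10.0, False: 0.0}

def expected_value(action):
    """Modelled consequences of one 'possible action': its expected payoff."""
    p = OUTCOME_MODEL[action]
    return p * PAYOFF[True] + (1.0 - p) * PAYOFF[False]

def choose_action():
    """Prospective counterfactuals: compare the candidate actions' modelled
    consequences and take the one with the best expected payoff."""
    return max(OUTCOME_MODEL, key=expected_value)

def retrospective_regret(taken, realised_payoff):
    """Retrospective counterfactual: after acting, compare the realised payoff
    with the modelled expected payoff of the best action not taken."""
    best_alternative = max((a for a in OUTCOME_MODEL if a != taken),
                           key=expected_value)
    return expected_value(best_alternative) - realised_payoff

action = choose_action()                               # "act_a" with these numbers
realised = PAYOFF[random.random() < OUTCOME_MODEL[action]]
print(action, realised, retrospective_regret(action, realised))
```

Note that the retrospective comparison is only against the agent's own model of the alternative, which is the easy part; whether something different really could have happened is the harder question raised above.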