Thanks! Good insights there. Am reproducing the comment here for people less willing to click through:
I haven’t read the literature on how counterfactuals ought to work in ideal reasoners, so I have no opinion there. But as for the part where you suggest an empirical description of counterfactual reasoning in humans, I think I basically agree with what you wrote.
I think the neocortex has a zoo of generative models, and a fast way of detecting when two are compatible, and if they are, snapping them together like Legos into a larger model.
For example, the model of “falling” is incompatible with the model of “stationary”—they make contradictory predictions about the same boolean variables—and therefore I can’t imagine a “falling stationary rock”. On the other hand, I can imagine “a rubber wine glass spinning” because my rubber model is about texture etc., my wine glass model is about shape and function, and my spinning model is about motion. All 3 of those models make non-contradictory predictions (mostly because they’re issuing predictions about non-overlapping sets of variables), so the three can snap together into a larger generative model.
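The compatibility check described above can be sketched in code. This is purely my own toy illustration (the names and representation are assumptions, not anything from the comment): treat each generative model as a mapping from variable names to predicted values, so two models are compatible exactly when they agree on every variable they both predict.

```python
# Toy sketch: a "generative model" is just a dict from variable names to
# predicted values. These example models are hypothetical illustrations.

def compatible(model_a, model_b):
    """True iff the two models make no contradictory predictions
    about any variable they both predict."""
    shared = model_a.keys() & model_b.keys()
    return all(model_a[v] == model_b[v] for v in shared)

falling    = {"motion": "accelerating downward"}
stationary = {"motion": "none"}
rubber     = {"texture": "rubbery"}
wine_glass = {"shape": "stemmed glass"}
spinning   = {"motion": "rotating"}

# Contradictory predictions about the same variable -> incompatible:
compatible(falling, stationary)   # False
# Disjoint sets of variables -> trivially compatible, can snap together:
compatible(rubber, wine_glass)    # True
```

On this picture, "rubber" and "wine glass" and "spinning" snap together mostly because their prediction sets barely overlap, which matches the intuition in the paragraph above.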
So for counterfactuals, I suppose we start by hypothesizing the core of a model (“a bird the size of an adult blue whale”) and then search out more little generative-model pieces that can snap onto that core, growing it out as much as possible in different directions, until we hit the limit where we can’t snap on any more details without making the whole thing unacceptably self-contradictory. Something like that...
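The growing-out process just described can be sketched as a greedy loop, again purely as my own illustration under the same toy representation (models as dicts of predictions, with compatibility meaning no contradictory shared predictions):

```python
# Toy sketch of "growing" a counterfactual: start from a core model and
# greedily snap on any candidate piece that doesn't contradict what we
# already have. All model contents here are hypothetical examples.

def grow_model(core, pieces):
    """Greedily merge compatible pieces onto a copy of the core model."""
    merged = dict(core)
    for piece in pieces:
        shared = merged.keys() & piece.keys()
        if all(merged[v] == piece[v] for v in shared):
            merged.update(piece)  # snap it on
        # else: skip — snapping it on would be self-contradictory
    return merged

core = {"kind": "bird", "size": "adult blue whale"}
pieces = [
    {"locomotion": "flying"},
    {"size": "sparrow"},            # contradicts the core -> rejected
    {"covering": "feathers"},
]
grow_model(core, pieces)
```

A greedy order-dependent loop like this is of course much cruder than whatever parallel search the neocortex actually does; it's only meant to make the "snap on details until contradiction" idea concrete.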
Sorta related: my comment here