Those of a Bayesian leaning will tend to say things like “probability is subjective”, and claim this is an important insight into the nature of probability—one might even go so far as to say “probability is an answer, not a question”. But this doesn’t mean you can believe what you want; not exactly. There are coherence constraints. So, once we see that probability is subjective, we can then seek a theory of the subjectivity, which tells us “objective” information about it (yet which leaves a whole lot of flexibility).
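(To make “coherence constraints” concrete, here is the standard textbook picture, a sketch rather than anything specific to this exchange: the axioms that subjective credence must satisfy, enforced by Dutch-book arguments.)

```latex
% Coherence constraints on subjective credence: whatever you believe,
% your credence function P must satisfy the probability axioms.
\[
  P(E) \ge 0, \qquad P(\Omega) = 1, \qquad
  P(E \cup F) = P(E) + P(F) \quad \text{for disjoint } E, F
\]
% The Dutch-book theorem gives these "objective" bite: betting odds
% that violate any axiom accept a finite set of bets with a guaranteed
% net loss, no matter how the world turns out.
```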
The same might be true of counterfactuals. I personally lean toward the position that the constraints on counterfactuals are just that they be consistent with evidential predictions, but I don’t claim to be unconfused. Mine is a “counterfactuals are subjective but have significant coherence constraints” type position, though (arguably) a fairly minimal one: the constraint is a version of “counterfacting on what you actually did should yield what actually happened”, one of the most basic constraints on what counterfactuals should be.
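To spell that constraint out a little (the notation here is mine, a sketch rather than anything canonical): write $a_0$ for the action actually taken, $o_0$ for the outcome actually observed, and $O^{a}$ for the counterfactual outcome under action $a$.

```latex
% Consistency: counterfacting on the action actually taken must return
% the outcome actually observed.
\[
  A = a_0 \;\wedge\; O = o_0 \;\implies\; O^{a_0} = o_0
\]
% Distributional form, matching the "consistent with evidential
% predictions" reading: on the actual action, the counterfactual
% prediction agrees with the ordinary conditional prediction.
\[
  P\big(O^{a_0} \mid A = a_0\big) = P\big(O \mid A = a_0\big)
\]
```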
On the other hand, my theory of counterfactuals is pretty boring and doesn’t directly solve problems—it more says “look elsewhere for the interesting stuff”.
Edit --
Oh, also, I wanted to pitch the idea that counterfactuals, like a whole bunch of things, should be thought of as “constructed rather than real”. This is subtly different from “subjective”. We humans are pretty far along in an ongoing process of figuring out how to be and act in the world. Sometimes we come up with formal theories of things like probability, utility, counterfactuals, and logic. The process of coming up with these formal theories informs our practice. Our practice also informs the formal theories. Sometimes a theory seems to capture what we wanted really nicely. My argument is that in an important sense we’ve invented, not discovered, what we wanted.
So, for example, utility functions. Do utility functions capture human preferences? No, not really, they are pretty far from preferences observed in the wild. However, we’re in the process of figuring out what we prefer. Utility functions capture some nice ideas about idealized preferences, so that when we’re talking about idealized versions of what we want (trying to figure out what we prefer upon reflection) it is (a) often pretty convenient to think in terms of utilities, and (b) somewhat difficult to really escape the framework of utilities. Similarly for probability and logic as formal models of idealized reasoning.
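For concreteness, the sense in which utility functions capture idealized preference is usually cashed out via the von Neumann-Morgenstern representation theorem; a standard sketch (not anything original to this comment):

```latex
% von Neumann--Morgenstern: if a preference relation \succeq over
% lotteries satisfies completeness, transitivity, continuity, and
% independence, then it is represented by expected utility:
\[
  L \succeq M \iff \mathbb{E}_{L}[u] \ge \mathbb{E}_{M}[u]
\]
% for some utility function u on outcomes, unique up to positive
% affine transformation u -> a*u + b with a > 0. Preferences "in the
% wild" routinely violate these axioms; the theorem characterizes the
% idealization, not observed behavior.
```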
So, just as utility functions aren’t really out there in the world, counterfactuals aren’t really out there in the world. But just as it might be that we should think about our preferences in terms of utility anyway (...or maybe abandon utility in favor of better theoretical tools), we might want to equip our best world-model with counterfactuals anyway (...or abandon them in favor of better theoretical tools).
Hopefully I tie up my old job soon so that I can dive deeper into Agent Foundations, including your sequence on CDT=EDT.
Anyway, I’m slightly confused by your comment, because I get the impression you think there is more divergence between our ideas than I think exists. When you talk about counterfactuals being constructed rather than real, it’s very similar to what I meant when I (briefly) noted that some definitions are more natural than others (https://www.lesswrong.com/posts/peCFP4zGowfe7Xccz/natural-structures-and-definitions).
It’s on this basis that I now argue that raw counterfactuals are particularly important, rather than, as I argued before, that all other definitions of counterfactuals need to be justified in terms of them.
The next step for me will probably be to look at the existing notions of counterfactual and see which ones, if any, don’t ultimately rely on raw counterfactuals to justify their value.
“counterfacting on what you actually did should yield what actually happened”: what do you mean by this? I can think of one definition on which this is pretty much trivial and another on which it is essentially circular.