Hi Bunthut,
First, I’ll encourage you to have a look at material on what I thought this post was going to be about from the title: https://en.wikipedia.org/wiki/Counterfactual_conditional . I know about this subject primarily from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.44.3738&rep=rep1&type=pdf (which is more concrete/mathematical than the Wikipedia article, as it’s written by computer scientists rather than philosophers).
Second: If I’m understanding this correctly in my sleep-deprived state, you’re actually working on the exact same problem that I am in one of my papers, except that we focus on causality/counterfactuals in logic systems which correspond to programming language semantics. I got stuck on similar problems. I can send you a preprint if you PM me.
Overall, my take is that you’re getting stuck on superficial differences of formalism, which makes it much harder for sleep-deprived me to find the insight. E.g.: the desire to use graphs to represent proofs; with the proper conceptual toolkit, this “multiple-antecedent problem” is a non-issue. (Two solutions: (1) use hypergraphs [which we do in the current draft of the paper]; (2) join antecedents into one, as is done in proof categories.)
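To make those two solutions concrete, here’s a rough sketch (names are purely illustrative, not how our paper actually encodes it): each inference step becomes a hyperedge from a set of antecedents to one consequent, and solution (2) just collapses that set into a single conjunctive node.

```python
# Rough sketch (illustrative only): a proof as a hypergraph, where each
# inference step is a hyperedge from a *set* of antecedents to one consequent,
# so "A, B |- C" is a single edge rather than two.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Inference:
    antecedents: frozenset    # e.g. frozenset({"A", "B"})
    consequent: str           # e.g. "C"

@dataclass
class ProofHypergraph:
    steps: list = field(default_factory=list)

    def add_step(self, antecedents, consequent):
        self.steps.append(Inference(frozenset(antecedents), consequent))

proof = ProofHypergraph()
proof.add_step({"A", "B"}, "C")     # solution (1): one hyperedge, two antecedents
proof.add_step({"A & B"}, "C")      # solution (2): antecedents joined into one node
```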
Hello Darmani,
I’m indeed talking about a kind of counterfactual conditional, one that could apply to logical rather than causal dependencies.
Avoiding multiple antecedents isn’t just a matter of what my conceptual toolkit can handle; I could well have used two different types of nodes, which would have represented it just fine. However, restricting inferences this way makes a lot of things easier. For example, it means that all inferences to any point “after” C come only from propositions that have come through C. If an inference could have multiple antecedents, then there would be inferences that combine a premise derived from K with a premise in B, and it’s not clear whether the separation can be kept there.
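Roughly, the property I’m relying on looks like this (a toy sketch with made-up node names, not my actual construction):

```python
# Toy sketch (made-up names): with single-antecedent inferences every derived
# node has exactly one parent, so anything derived "after" C must have C in
# its ancestor chain, and the K-side and B-side of the proof stay separated.

def came_through(parent, node, cut):
    """parent maps each derived node to its single antecedent."""
    while node in parent:
        node = parent[node]
        if node == cut:
            return True
    return False

single = {"C": "K", "D": "C", "E": "D"}      # K -> C -> D -> E
assert came_through(single, "E", "C")        # everything after C came through C

# With multiple antecedents, D could instead be inferred from both C and B,
# mixing a K-derived premise with a B-premise; the check above no longer
# even makes sense for such a step.
multi = {"D": ("C", "B")}
```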
From the paper you linked… Their first definition of the counterfactual (the one where the consequent can only be a simple formula) describes the causal counterfactual (well, the indeterministic-but-not-probabilistic protocols throw things off a bit, but we can ignore those), and the whole “tremble” analysis closely resembles causal decision theory. Now, their concept of knowledge is interesting because it’s defined with respect to the ecosystem, and so is implicitly knowledge of the other agents’ strategies, but the subsequent definition of belief rather kneecaps this.

The problem that belief is supposed to solve is very similar to the rederivation problem I’m trying to get around, but it’s formulated in terms of model theory. This seems like a bad way to formulate it, because having a model is a holistic property of the formal system, and our counterfactual surgery is basically trying to break a few things in that system without destroying the whole. And indeed, the way belief is defined is basically to always assume that no further deviation from the known strategy will occur, so it’s impossible to use the counterfactuals based on it to evaluate different strategies. Or, applying it to “strategies” of logical derivation: if you try to evaluate “what if X wasn’t derived by the system”, it will do normal logical derivations, then the first time it would derive X it derives non-X instead, and then it continues doing correct derivations, soon derives X again, and hence a contradiction.
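To spell out that failure mode, here’s a toy derivation (my own construction, not the paper’s definition): the surgery flips the first derivation of X, but nothing stops X from being rederived along another route.

```python
# Toy illustration (my own construction): counterfactual surgery that replaces
# the *first* derivation of X with not-X, then keeps applying the correct
# rules. If X has a second derivation, it is soon rederived and the "world"
# contains both X and not-X.

def derive(steps, flip="X"):
    """steps: list of (premises, conclusion) pairs, applied in order."""
    known, flipped = set(), False
    for premises, conclusion in steps:
        if all(p in known for p in premises):
            if conclusion == flip and not flipped:
                conclusion, flipped = "not X", True   # the surgery
            known.add(conclusion)
    return known

# X follows from A, and independently from B; flipping the first derivation
# does nothing to block the second.
steps = [((), "A"), (("A",), "X"), ((), "B"), (("B",), "X")]
facts = derive(steps)
assert "X" in facts and "not X" in facts   # contradiction rederived
```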
PM is sent.