Hello Darmani,
I’m indeed talking about a kind of counterfactual conditional, one that could apply to logical rather than causal dependencies.
Avoiding multiple antecedents isn’t just a matter of what my conceptual toolkit can handle; I could well have used two different types of node, and that would have represented it just fine. However, restricting inferences this way makes a lot of things easier. For example, it means that all inferences to any point “after” C come only from propositions that have passed through C. If an inference could have multiple antecedents, there would be inferences combining a premise derived from K with a premise in B, and it’s not clear whether the separation could be maintained.
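To make that concrete, here is a minimal Python sketch of the single-antecedent restriction (the names `Node` and `passes_through` are purely illustrative, not part of my actual formalism):

```python
class Node:
    """A derived proposition recording the one premise it came from."""
    def __init__(self, prop, antecedent=None):
        self.prop = prop
        self.antecedent = antecedent  # exactly one, or None for a premise

def passes_through(node, cut):
    """Check that the derivation of `node` runs through `cut`.

    With a single antecedent per inference there is exactly one chain
    of ancestors to walk. With multiple antecedents we would have to
    check every branch, and an inference could combine a branch that
    goes through the cut with one that bypasses it.
    """
    while node is not None:
        if node is cut:
            return True
        node = node.antecedent
    return False

# B-premise -> C -> K-derived consequence: everything "after" C
# demonstrably came through C.
b = Node("premise in B")
c = Node("C", antecedent=b)
k = Node("consequence via K", antecedent=c)
assert passes_through(k, c)
```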
From the paper you linked: their first definition of the counterfactual (the one where the consequent can only be a simple formula) describes the causal counterfactual (well, the indeterministic-but-not-probabilistic protocols throw things off a bit, but we can ignore those), and the whole “tremble” analysis closely resembles causal decision theory. Now, their concept of knowledge is interesting because it’s defined with respect to the ecosystem, and so implicitly includes knowledge of the other agents’ strategies, but the subsequent definition of belief rather kneecaps this.

The problem that belief is supposed to solve is very similar to the rederivation problem I’m trying to get around, but it’s formulated in terms of model theory. This seems like a bad way to formulate it, because having a model is a holistic property of the formal system, and our counterfactual surgery is basically trying to break a few things in that system without destroying the whole. And indeed, belief is defined essentially by always assuming that no further deviation from the known strategy will occur, so it’s impossible to use the counterfactuals based on it to evaluate different strategies. Or, applying it to “strategies” of logical derivation: if you try to evaluate “what if X wasn’t derived by the system”, it will do normal logical derivations, then the first time it would derive X it derives non-X instead, and then it continues doing correct derivations, soon rederives X, and therefore derives a contradiction.
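To spell out that last failure mode, here is a toy Python sketch (my own construction, not the paper’s; the derivation stream and names are hypothetical, just to show the shape of the problem):

```python
def derive_with_tremble(steps, x):
    """Run a derivation stream, replacing the first derivation of `x`
    with its negation, then continuing derivation as normal."""
    derived = set()
    flipped = False
    for prop in steps:
        if prop == x and not flipped:
            prop = "not " + x   # the one-time deviation ("tremble")
            flipped = True
        derived.add(prop)
        if x in derived and "not " + x in derived:
            return "contradiction: derived both X and not-X"
    return derived

# The premises that imply X are left untouched, so X gets rederived
# downstream of the flipped step.
steps = ["A", "A implies X", "X", "B", "X"]
print(derive_with_tremble(steps, "X"))
# -> contradiction: derived both X and not-X
```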
PM sent.