Thanks for the pointers to these… I’ll probably move my comments there. On your other point:
In our models so far, this isn’t a problem: you just use a factory-standard first-order inference system.
and…
If I’m trying to reason about counterpossibles (or impossible possible worlds) then the usual laws of logic and proof just fly out of the window. I can’t legitimately infer from A → B and A → ~B that ~A.
The inference rule A → B, A → ~B ⊢ ~A is the familiar principle of proof by contradiction, i.e. “reductio ad absurdum” of the alternative. The difficulty I see is that you can’t really use reductio ad absurdum in an impossible-worlds semantics, because the impossible worlds are—ahem—absurd. That’s kinda the point.
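(To be clear, the rule itself is perfectly standard first-order reasoning; here’s a minimal check of it in Lean, purely as an illustration on my part:)

```lean
-- A minimal check of the rule A → B, A → ¬B ⊢ ¬A (Lean 4 syntax):
-- assume A, derive both B and ¬B, and discharge the assumption.
example (A B : Prop) (h₁ : A → B) (h₂ : A → ¬B) : ¬A :=
  fun a => h₂ a (h₁ a)
```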
I haven’t read all the details of what you’re trying, but my suspicion is that using standard off-the-shelf logics won’t work here: you will at least need to limit the application of the inference rules e.g. so that they can be applied only up to a maximum finite number of times (and so the set of theorems is not closed under logical inference). Maybe you need some sort of belief semantics, where the “worlds” admitted into the belief-system are strictly a mixture of possible and impossible worlds, but the agent hasn’t yet realized (or will never realize) that the impossible ones actually are impossible, so they “seem” to be possible. So he’s chugging along analysing those worlds using inference rules that will eventually hit a contradiction, but not yet. For “practical purposes” the system is consistent and won’t blow up. Though be very careful, because an inconsistent system with the rule (“If I prove I won’t do a, then do a”) could accidentally prove that it won’t nuke the planet and then (inconsistently) nuke the planet. Oops.
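To make that last worry concrete, here’s a toy sketch of the failure mode (entirely my own construction, not anybody’s actual algorithm; `provable` is a crude stand-in for proof search, modelling only that an inconsistent axiom set proves everything):

```python
# Toy sketch: an agent whose rule is "if I prove I won't do a, then do a",
# evaluated against an inconsistent axiom set.

def provable(statement, axioms):
    # Crude stand-in for proof search: listed axioms count as provable, and an
    # inconsistent set (containing both P and "not P") proves everything.
    inconsistent = any(("not " + p) in axioms for p in axioms)
    return inconsistent or statement in axioms

def if_i_prove_i_wont_do_it_then_do_it(actions, axioms, default):
    # The rule quoted above: "If I prove I won't do a, then do a."
    for a in actions:
        if provable(f"agent does not do {a}", axioms):
            return a
    return default

# A consistent theory never triggers the rule spuriously...
print(if_i_prove_i_wont_do_it_then_do_it(["nuke the planet"], {"2 + 2 = 4"}, "wait"))
# -> "wait"
# ...but an inconsistent one "proves" the agent won't nuke the planet, so: oops.
print(if_i_prove_i_wont_do_it_then_do_it(["nuke the planet"], {"P", "not P"}, "wait"))
# -> "nuke the planet"
```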
my suspicion is that using standard off-the-shelf logics won’t work here: you will at least need to limit the application of the inference rules e.g. so that they can be applied only up to a maximum finite number of times (and so the set of theorems is not closed under logical inference)
This happens automatically, because (1) only the statements contributing to the decision matter, and there’s a finite number of them, and (2) the presence of things like the diagonal step/chicken rule in the decision algorithm implies that the inference of absurdity doesn’t happen. So we can prove that the agent cannot infer absurdity, even though it’s free to use any first-order inference it wants, and even though absurdity does follow from the axioms (in the setting without a provability oracle).
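Schematically, here is roughly where the diagonal step/chicken rule sits in that kind of decision algorithm (my reconstruction of the standard proof-based setup, not the exact code under discussion; `provable` stands in for bounded proof search in the agent’s theory):

```python
# Sketch of a proof-based decision procedure with the chicken rule up front.
# Note that only the finitely many statements enumerated below are ever examined.

def decide(actions, outcomes, provable):
    # Step 1: diagonal step / chicken rule.
    # If the theory proves "I don't take a", take a immediately; this is what
    # blocks the agent from ever consistently deriving that conclusion.
    for a in actions:
        if provable(f"agent() != {a}"):
            return a

    # Step 2: ordinary consequence search over a finite list of statements,
    # one "agent() == a implies utility == u" sentence per (action, outcome) pair.
    best_action, best_utility = None, float("-inf")
    for a in actions:
        for u in outcomes:
            if provable(f"agent() == {a} -> utility == {u}") and u > best_utility:
                best_action, best_utility = a, u
    return best_action if best_action is not None else actions[0]

# Toy usage with a lookup-table "provability" predicate:
facts = {"agent() == cooperate -> utility == 3", "agent() == defect -> utility == 1"}
print(decide(["cooperate", "defect"], [1, 3], lambda s: s in facts))  # -> "cooperate"
```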
In the setting with a provability oracle, the agent’s algorithm is constructed in such a way that its axioms become sufficiently weak that the impossible counterfactuals (impossible from the point of view of a stronger theory) remain consistent according to the theory used by the agent, and so in that setting the impossible worlds have actual models.
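One way to make the “actual models” step explicit (my gloss, with T for the agent’s theory and A() for its action; the last step is just Gödel’s completeness theorem):

```latex
% Sketch: if the agent's theory T cannot refute the counterfactual action a,
% the corresponding "impossible" world has a genuine model from T's point of view.
\[
  T \not\vdash \neg\,(\mathtt{A}() = a)
  \;\Longrightarrow\;
  T + (\mathtt{A}() = a)\ \text{is consistent}
  \;\Longrightarrow\;
  \exists\,\mathcal{M}\ \ \mathcal{M} \models T + (\mathtt{A}() = a)
\]
```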