Paraconsistency and relevance: avoid logical explosions

EDIT: corrected from previous version.

If the moon is made of cheese, then Rafael Delago was elected president of Ecuador in 2005.

If you believe that Kennedy was shot in 1962, then you must believe that Santa Claus is the Egyptian god of the dead.

Both of these are perfectly true implications in classical logic. The antecedent is false, hence the implication holds, no matter what the conclusion is: if A is false, then A→B is true.
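
To make the behaviour of the material conditional concrete, here is a quick Python sketch (just an illustration, with an ad hoc helper function):

# Material implication: "A -> B" is defined as (not A) or B.
def implies(a, b):
    return (not a) or b

print(implies(False, True))   # True: false antecedent, any conclusion
print(implies(False, False))  # True: false antecedent again
print(implies(True, False))   # False: the only way to falsify A -> B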

It does feel counterintuitive, though, especially because human beliefs do not work in this way. Consider instead the much more intuitive statement:

If you believe that Kennedy was shot in 1962, then you must believe that Lee Harvey Oswald was also shot in 1962.

Here there seems to be a connection between the two clauses; we feel A→B is more justified when “→” actually does some work in establishing a relationship between A and B. But can this intuition be formalised?

One way to do so is to use relevance logics, which are a subset of “paraconsistent” logics. Paraconsistent logics are those that avoid the principle of explosion. This is the rule in classical logic that if you accept one single contradiction, one single (A and not-A), then you can prove anything at all. This is akin to accepting one false belief that contradicts your other beliefs: after that, anything goes. The contradiction explodes and takes everything down with it. But why would we be interested in avoiding either the principle of explosion or unjustified uses of “→”?

There seem to be three groups that could benefit from avoiding this. Firstly, those who are worried about the potential for the occasional error in their data or their premises, or a missed step in their reasoning, and don’t want to collapse into incoherence because of a single mistake (paraconsistency has had applications in database management, for instance). These generally need only ‘weakly’ paraconsistent theories. Secondly, the dialetheists, who believe in the existence of true contradictions. The liar’s paradox is an example of this: if L=”L is false”, then a dialetheist would simply say that L is true, and not-L is also true, accepting the contradiction (L and not-L). This has the advantage of allowing a language to talk about its own truth: arithmetic truths can be defined in arithmetic, if we accept a few contradictions along the way.
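
As a tiny illustration of why the liar sentence resists a classical truth value (a sketch only, modelling the sentence as a boolean fixed point):

# The liar sentence L asserts "L is false", so L should be true exactly when it is false.
solutions = [v for v in (True, False) if v == (not v)]
print(solutions)  # []: no classical truth value works, which is the dialetheist's opening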

Thirdly, for Less Wrong: the best use of relevance logic would be to articulate counterfactuals without falling into the Löbian/self-confirming trap and blowing up. Consider the toy problem:

def U():
    # U() returns the utility; A() is the agent's decision function,
    # defined elsewhere in the toy problem.
    if A() == 1:
        return 5
    else:
        return 10

Then the problem is that in UDT, sentences such as L=”(A()==1 → U() == 5) and (A()!=1 → U() == -200)” are self-confirming: if they are accepted by the utility-maximising agent A(), then they become true. According to L, any output other than 1 yields -200, so the utility maximiser outputs 1, making the first clause true by direct calculation, and the second clause true because its antecedent is false. This leads to all sorts of Löbian problems. However, if we reject the gratuitous use of “→”, then even if we kept Löbian reasoning, the argument would fail, as L would no longer be self-confirming.
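
To see the self-confirmation concretely, here is a rough sketch under the material reading of “→” (with U() adapted to take the agent's output as a parameter, purely so the snippet runs on its own):

def implies(a, b):
    # Material implication: A -> B is (not A) or B.
    return (not a) or b

def U(a):
    # The toy utility, with the agent's output a passed in explicitly.
    return 5 if a == 1 else 10

# Suppose the agent accepts L; by L, any output other than 1 yields -200,
# so the utility maximiser outputs 1.
a = 1
clause1 = implies(a == 1, U(a) == 5)     # true by direct calculation
clause2 = implies(a != 1, U(a) == -200)  # true only because the antecedent is false
print(clause1 and clause2)               # True: L has confirmed itself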

Ok, as the actor said, that’s my motivation; now, how do we do it? Where does the principle of explosion come from, and what do we have to do to get rid of it? Allegedly, one mathematician once defended the argument “(0=1) implies (I am God!)” by saying “(0=1) implies (1=2); (I and God are two) hence (I and God are one)!”. The more rigorous proof, starting from the contradictory premise (A and not-A) and proving any B, goes as follows (terminology will be explained, with a quick mechanical check of the steps after the list):

  1. A and not-A (premise)

  2. A (by conjunction elimination from (1))

  3. not-A (by conjunction elimination from (1))

  4. A or B (by disjunction introduction from (2))

  5. B (by disjunctive syllogism from (3) and (4))
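
Each step uses a rule that is classically beyond reproach; a small brute-force check (a sketch, where ‘valid’ means truth-preserving over all two-valued assignments to A and B) confirms as much:

from itertools import product

def valid(premise, conclusion):
    # Classically valid: the conclusion is true under every assignment
    # of truth values (to A and B) that makes the premise true.
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if premise(a, b))

print(valid(lambda a, b: a and b, lambda a, b: a))             # conjunction elimination
print(valid(lambda a, b: a, lambda a, b: a or b))              # disjunction introduction
print(valid(lambda a, b: (a or b) and not a, lambda a, b: b))  # disjunctive syllogism
print(valid(lambda a, b: a and not a, lambda a, b: b))         # explosion: all four print True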

To reject this proof, we have four options: reject conjunction elimination, reject disjunction introduction, reject the disjunctive syllogism, or reject transitive proofs: say that, for instance, “(2) and (3)” implies “(4)”, and “(3) and (4)” implies “(5)”, but reject the claim that “(2) and (3)” implies “(5)”.

Rejecting transitive proofs is almost never done: what is the point of a proof system if you can’t build on previous results? Conjunction elimination says that “(A and B) is true” means that both A and B are true; this seems far too fundamental to our understanding of ‘and’ to be tossed aside.

Disjunction introduction says that “A is true” implies that “(A or B) is true” for any B. This is also intuitive, though possibly a little less so; we are randomly inserting a B of which we know nothing. There are paraconsistent logics that reject disjunction introduction, but we won’t be looking at them here (and why, I hear you ask? For the deep philosophical reason that the book I’m reading doesn’t go into them).

That leaves the disjunctive syllogism. This claims that from (A or B) and (not-A) we can deduce B. It certainly seems intuitive, so rejecting it needs some care: in my next post I’ll present a simple and reasonably intuitive paraconsistent system of logic that does so.