I tend to like constructivist maths because it makes sense from a computational point of view. A boolean function may return True or False or it may not return at all (a member of the bottom set in Type theory terms) if it gets stuck in an infinite loop.
So I’m going to look at things from a computational point of view, to see if anything shakes loose. What might ~(A and ~A) represent computationally? The one thing I can think of is if A isn’t a pure function but an unknown monad of some variety (there is some hidden state): a race condition or memory corruption that means that when A is evaluated at different points in time it comes to a different value.
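For concreteness, here is a minimal Haskell sketch (an entirely hypothetical setup) of such an impure “A”, where hidden mutable state makes A and ~A appear to hold at different times:

    import Data.IORef

    -- Hypothetical setup: evaluating "A" consults hidden mutable state, so two
    -- evaluations at different times need not agree.
    main :: IO ()
    main = do
      hidden <- newIORef True
      let evalA = readIORef hidden   -- "A" is not a pure value
      x <- evalA
      writeIORef hidden False        -- the state changes in between
      y <- evalA
      print (x && not y)             -- "A and ~A" appears to hold: prints True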
Going by the MWI hint, ~(A and ~A) also seems to be able to represent questions that aren’t meaningful to ask. Did a particle travel through that path or did it not travel that path? Well, it travelled through all possible paths, which is neither a yes nor a no. You may want a logic that can represent such meaningless questions, because you can’t know a priori that a question is meaningless.
So I’m going to look at things from a computational point of view, to see if anything shakes loose. What might ~(A and ~A) represent computationally?
In intuitionistic type theory, if you use the usual definitions that ¬X := (X → ⊥) and that X ∧ Y means the set of pairs of elements of X and elements of Y, then ¬(A ∧ ¬A) is inhabited, because given an element a : A and a function f : A → ⊥, you can apply f to a to get f(a) : ⊥. This still works if you replace ⊥ by some other set which may not be empty (giving the whole thing more computational content but still leaving ECQ trivially true). Perhaps you could come up with an even more different definition of ¬X?
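For what it’s worth, the inhabitant can be written out as a small Haskell sketch (using Void for ⊥; the names are just illustrative):

    import Data.Void (Void)

    -- The type corresponding to ¬(A ∧ ¬A), read as (A, A -> Void) -> Void,
    -- is inhabited: just apply the function to the element.
    nonContradiction :: (a, a -> Void) -> Void
    nonContradiction (x, f) = f x

    -- Replacing Void by an arbitrary result type r works just as well, which is
    -- the "more computational content" variant mentioned above.
    nonContradiction' :: (a, a -> r) -> r
    nonContradiction' (x, f) = f x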
Ah, wait. I think it’s much simpler—and “much more dual.”
Skimming some things about anti-intuitionistic logic, in “Anti-intuitionism and paraconsistency” by Brunner and Carnielli, I found the following slogan:
This denomination can be understood taking into account that, as far as the intuitionistic philosophic program is committed to constructing truthood, our anti-constructive logics can be seen as committed to eliminating falsehood.
So how about we keep the computational content of intuitionistic logic/type theory exactly, but change our logical interpretation of these constructs? Instead of asking what computational object it would take to prove a given logical statement, let’s ask what it would take to disprove this statement.
(A and B), intuitionistic interpretation: To prove this you need a proof of A and a proof of B, so if A and B are types, (A and B) is the type of pairs of elements of A and elements of B.
(A and B), anti-intuitionistic interpretation: To disprove this you need to say that you’re going to disprove A and then disprove it, or you need to say that you’re going to disprove B and then disprove it, so if A and B are types, (A and B) is the disjoint union of A and B.
(A or B), intuitionistic: To prove this, say you’re going to prove A then prove it, or say you’re going to prove B then prove it; (A or B) is the disjoint union of A and B.
(A or B), anti-intuitionistic: To disprove this you need to disprove both A and B; elements of (A or B) are pairs of elements of A and elements of B.
(exists x:X. P(x)), intuitionistic: To prove this, tell us an element x of X, then prove P(x). Elements of (exists x:X. P(x)) are pairs where the first element is an element x of X, and the second element is an element of P(x).
(exists x:X. P(x)), anti-intuitionistic: To disprove this, you need to disprove P(x) for every possible element x of X. Elements of (exists x:X. P(x)) are functions taking an element x of X and returning an element of P(x).
(forall x:X. P(x)), intuitionistic: To prove this, you need to prove P(x) for every possible element x of X. Elements of (forall x:X. P(x)) are functions taking an element x of X and returning an element of P(x).
(forall x:X. P(x)), anti-intuitionistic: To disprove this, tell us an element x of X, then disprove P(x). Elements of (forall x:X. P(x)) are pairs where the first element is an element x of X, and the second element is an element of P(x).
(False), intuitionistic: There is no proof of this, so this is the empty type.
(False), anti-intuitionistic: This is trivial to disprove, so this is the unit type.
(True), intuitionistic: This is trivial to prove, so this is the unit type.
(True), anti-intuitionistic: This is impossible to disprove, so this is the empty type.
(A implies B), intuitionistic: To prove this, you need to give a proof of B if you’re given a proof of A, so this is the type of functions from A to B.
This time, instead of asking what (A implies B) is in anti-intuitionistic logic (in fact there simply is no implication connective there), let’s ask: what logical connective should anti-intuitionistically map to the type of functions from A to B? I.e., what connective is such that to disprove it, you need to be able to convert a disproof of A into a disproof of B?
Actually, what we do is simply invent a new one, “pseudo-difference” or “but not”, which corresponds to the classical (A and (not B)) in the same way that intuitionistic implication corresponds to the classical ((not A) or B). Thus:
(A but not B) aka (A—B), anti-intuitionistically: To disprove this, you need to give a disproof of A given a disproof of B, so this is the type of functions from B to A.
Finally, (not X) is defined as (True—X), with the same interpretation of “a function from X to the empty type” as in intuitionistic logic.
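To make the reading above concrete, here is a minimal Haskell sketch; the type synonyms are names invented for this sketch, not anything standard:

    import Data.Void (Void)

    -- Here a value of a type stands for a *disproof* of the proposition, and the
    -- parameters a, b are the refutation types already assigned to A and B.
    -- The quantifier cases are omitted because they would need dependent types.
    type RefuteAnd a b = Either a b  -- disprove (A and B): disprove A or disprove B
    type RefuteOr  a b = (a, b)      -- disprove (A or B): disprove both
    type RefuteTrue    = Void        -- True can never be disproved
    type RefuteFalse   = ()          -- False is disproved trivially
    type ButNot a b    = b -> a      -- disprove (A but-not B): turn a disproof of B into a disproof of A
    type RefuteNot a   = ButNot RefuteTrue a  -- not X := (True but-not X), i.e. X -> Void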
Hm, maybe we can formulate this using just the intuitionistic provability judgement.
We associate each formula A with a formula 〈A〉 which is the type of refutations of A, such that A is disprovable if 〈A〉 is (intuitionistically) provable. The cases you give for ∧, ∨, ∃, ∀, True and False become
〈A ∧ B〉 = 〈A〉 ∨ 〈B〉
〈A ∨ B〉 = 〈A〉 ∧ 〈B〉
〈∃(x:A). B〉 = ∀(x:A). 〈B〉
〈∀(x:A). B〉 = ∃(x:A). 〈B〉
〈True〉 = False
〈False〉 = True
and we can indeed see that this is “dual” in the sense that 〈〈A〉〉 = A.
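Here is a minimal Haskell sketch of this translation for the propositional fragment (the datatype and names are invented for the sketch; atoms are left untouched), which makes the involution visible:

    -- refute computes the translation 〈·〉 on the propositional connectives.
    data Formula
      = Atom String
      | And Formula Formula
      | Or  Formula Formula
      | Tru
      | Fls
      deriving (Eq, Show)

    refute :: Formula -> Formula
    refute (Atom p)  = Atom p   -- atoms would need their own "refuted" judgement in a fuller treatment
    refute (And a b) = Or  (refute a) (refute b)
    refute (Or  a b) = And (refute a) (refute b)
    refute Tru       = Fls
    refute Fls       = Tru

    -- "Dual" in the sense above: refute (refute f) == f for every f.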
The domain of the ∀ and ∃ cases is interesting. If we are using just first-order logic it can be omitted, but in higher-order logic and Type Theory we want to say whether we are quantifying over individuals, relations, etc. I think the translation of e.g. ∀(x:A). B makes sense: in order to disprove it we should give a concrete (intuitionistically acceptable) construction of an element x that causes it to fail, so we stay in “construct a proof” mode for the domain A.
But now we have already defined the translation of implication! Using the standard type theory trick to encode A → B as ∀(x:A). B, where x is a fresh variable not free in B, we get
〈A → B〉 = 〈∀(x:A). B〉 = ∃(x:A). 〈B〉 = A × 〈B〉
a pair of a proof of A and a disproof of B. This seems related to your “but not” operator, except here it mixes the logical symbol × and the syntactic operation 〈〉. Similarly the translation of not becomes just
〈not A〉 = 〈A → False〉 = A × 〈False〉 = A × True ≅ A
So this is basically just the not operator in disguise. The “dualizing translation” in the paper you linked does indeed use a but-not symbol; I guess I should go and read that paper...
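(As a footnote, the two derived cases can be written in the same illustrative style as the refutation-reading sketch above, with the parameters standing for a proof of A and a disproof of B:)

    -- Illustrative names only: a disproof of (A -> B) pairs a proof of A with a
    -- disproof of B, and a disproof of (not A) collapses to a proof of A.
    type RefuteImplies proofA disproofB = (proofA, disproofB)  -- 〈A → B〉 = A × 〈B〉
    type RefuteIntuitNot proofA         = (proofA, ())         -- 〈not A〉 = A × True ≅ A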
Constructivist logic works great if you interpret it as saying which statements can be proven, or computed, but I would say it doesn’t hold up when interpreted as showing which statements are true (given your axioms). It’s therefore not really appropriate for mathematics, unless you want to look at mathematics in the light of its computational or proof-theoretic properties.