“(2+2=4) and ~(2+2=4)” is a statement in arithmetic and propositional logic, which are quite distinct from Bayesian probability. Mathematics really does have separate magisteria, because it is not necessarily connected to practical reality.
Any (A & ~A) works just as well. How can you assign greater than zero probability to a contradiction? Doesn’t the whole system fall apart? The same goes for assigning less than probability one to a tautology. If there are probability theories that do that, I would like to know of them.
This is the third time I have tried to reply to this, because it appears there are some serious inferential distances here and I would rather not come across as condescending.
Firstly, you must remember that a contradiction has no definite truth or falsehood value.
Now, I have seen some interesting papers that construct expanded probability theories in which 0 and 1 stand for logical falsehood and truth respectively. But even those do not include a special value for contradictions.
If we turn to type theory (think programming), we might say a contradiction is an expression that has the bottom type (a type with no values; an error in any case), but how do you put “no value” into probability theory?
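As a rough sketch of that point in Python (the function name and error message are my own, purely illustrative choices): a bottom-typed expression never evaluates to a value, so there is no outcome for a probability measure to put mass on.

```python
from typing import NoReturn

def absurd() -> NoReturn:
    """A bottom-typed expression: it never produces a value of any type."""
    raise RuntimeError("no value here")

# P(absurd() == something) is not a well-posed question: evaluating
# absurd() never yields an outcome, so there is nothing for a
# probability assignment to attach itself to.
```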
All in all, it is mathematically meaningless to talk about the probability of a logical contradiction, but don’t ask me for a proof of that; I am not that good.
So if you really want, you can say that 0 is falsehood and 1 is truth, and with a bit of sleight of hand you can use them as probabilities. But you will be stuck when you hit a contradiction.
Complete or Consistent, choose one.
Um, you can derive from Kolmogorov that (A & ~A) has probability zero. Very easily.
If (A & B) = ∅, then P(A & B) = 0.
(A & ~A) = ∅,
so P(A & ~A) = 0.
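Written out in full, a sketch using only the standard Kolmogorov axioms (normalization and additivity over disjoint sets):

$$
\Omega \cup \varnothing = \Omega,\quad \Omega \cap \varnothing = \varnothing
\;\Rightarrow\; P(\Omega) = P(\Omega) + P(\varnothing)
\;\Rightarrow\; P(\varnothing) = 0,
$$
$$
A \cap A^{c} = \varnothing
\;\Rightarrow\; P(A \cap A^{c}) = P(\varnothing) = 0.
$$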
In other words, if two sets are disjoint, the probability of their intersection is zero. Any set and its complement are disjoint, so the probability of their conjunction is zero.
It also helps if you think of the Boolean variables as fractions of a dartboard, and the probability as the area of that fraction. The formalism is perfectly isomorphic. Obviously, the intersection of any fraction and its complement will have an area of zero.
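A minimal sketch of that dartboard picture in Python (taking the unit square as the board and the left half as A; those particular choices are mine):

```python
import random

def throw_dart():
    # A dart lands uniformly at random on the unit-square "board".
    return random.random(), random.random()

def in_A(dart):
    x, _ = dart
    return x < 0.5  # A: the left half of the board

trials = 100_000
hits = 0
for _ in range(trials):
    d = throw_dart()
    if in_A(d) and not in_A(d):  # membership in A intersected with ~A
        hits += 1

print(hits / trials)  # always 0.0: the region A & ~A is empty, so its area is zero
```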
See, I believe I have a proof that the probability of any contradiction is zero. So I am going to have to ask for a proof to change my mind, or a problem with my proof (which I doubt, it’s very short, and I math often).
Oh I see, you are talking of exclusive outcomes, not contradictions. Yes, you are entirely right; exclusive outcomes work exactly that way. The probability of both Heads and Tails occurring at the same time on a coin flip is zero.
Contradictions in a system isomorphic with propositional logic do not work that way; they are an entirely separate mathematical object.
I would like to read more on that, because I believed them to be exactly equivalent.
A set’s intersection with its complement
has a perfect isomorphism with
a proposition’s conjunction with its negation.
The principle of explosion tells us that if one supposes both truth and falsehood as base premises, one can derive any conclusion by means of propositional logic theorems.
The intersection of a set with its complement is the empty set.
A consistent isomorphism of ZF set theory and propositional logic is achieved by letting truth be a non-empty set and falsehood be an empty set. Then intersection becomes logical conjunction, and complement with respect to the truth set becomes logical negation.
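A minimal sketch of that correspondence, assuming we pick some fixed one-element set as the “truth set” (the particular names and set below are my own):

```python
OMEGA = frozenset({0})   # truth: a fixed non-empty set
EMPTY = frozenset()      # falsehood: the empty set

def to_set(b):  return OMEGA if b else EMPTY
def to_bool(s): return s != EMPTY

def AND(s, t):  return s & t          # intersection plays conjunction
def NOT(s):     return OMEGA - s      # complement w.r.t. OMEGA plays negation

for p in (True, False):
    for q in (True, False):
        assert to_bool(AND(to_set(p), to_set(q))) == (p and q)
    assert to_bool(NOT(to_set(p))) == (not p)
    # A contradiction always lands on the empty set:
    assert AND(to_set(p), NOT(to_set(p))) == EMPTY
```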
Now, the reason I can argue for such things is actually backed up by Gödel’s incompleteness theorem. Propositional logic and ZF set theory can both implement Peano arithmetic; Bayesian probability cannot. Thus propositional logic and ZF set theory are complete but not consistent, while Bayes is consistent but not complete.
Argue for what things? I have no clue what the POE or Gödel’s theorem have to do with Kolmogorov provably assigning zero probability to contradictions.
What is the difference between exclusive outcomes and contradictions? How are they not the same mathematical object? If two exclusive outcomes both end up occurring, you can also explode the universe.
For A to be exclusive with B means that if A happened, B did not. So: if A and B, then B and ~B; from B, infer (B or P); from (B or P) and ~B, infer P. The POE is not something unique to contradictions that exclusive outcomes lack.
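For what it’s worth, that explosion step can be checked mechanically. A minimal sketch (mine) confirming that (B & ~B) -> P holds under every assignment:

```python
from itertools import product

def implies(x, y):
    # Material implication: x -> y
    return (not x) or y

# (B & ~B) -> P is a tautology: from a contradiction, anything follows.
assert all(implies(b and not b, p) for b, p in product([True, False], repeat=2))
```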
This comment explains where our communication runs skew
Except, contradictions really are the only way you can get to logical truth or falsehood; anything other than that necessarily relies on inductive reasoning at some point. So any probability theory employing those must use contradictions as a means for arriving at these values in the first place.
I do think that there’s not much room for contradictions in probability theories trying to actually work in the real world, in the sense that any argument of the form A->(B & ~B) also has to rely on induction at some point; but it’s still helpful to have an anchor where you can say that, if a certain relationship does exist, then a certain proposition is definitely true.
(This is not like saying that a proposition can have a probability of 0 or 1, because it must rely, at least somewhere down the line, on another proposition with a probability different from 0 and 1).
ISTM that you’re using the word “contradiction” in a non-standard way: in the usual sense, contradictions are logical falsehoods. What do you actually mean? (ETA: I guess paradoxes such as “This sentence is false”?)
I use “contradiction” in the completely ordinary sense, as seen in propositional logic: (P & ~P).
How is that not a logical falsehood?
It is not a logical falsehood, for several reasons. What I actually mean, using the traditional notation of propositional calculus, is that writing A asserts that the statement A is true. Were it a false statement, I would write ~A. Similarly, I write (P & ~P) to mean “It is true that both P and not P,” while I write ~(P & ~P) to mean “It is true that not both P and not P.”
Solving the latter as the equation ~(P & ~P) = TRUE for the variable P gives the trivial solution set {TRUE, FALSE}; solving the former, (P & ~P) = TRUE, for the variable P gives the empty solution set {}.
This is a simple convention of notation; I am sorry if that wasn’t clear. Yes, evaluating the Boolean expression (P & ~P) for any given value of P yields false.
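A minimal sketch of those two “equations” over the Boolean variable P (my own rendering of the convention above):

```python
# Solutions of (P & ~P) = TRUE and ~(P & ~P) = TRUE over P in {TRUE, FALSE}:
contradiction_solutions = {p for p in (True, False) if p and not p}
tautology_solutions     = {p for p in (True, False) if not (p and not p)}

print(contradiction_solutions)  # set()          -- no P satisfies P & ~P
print(tautology_solutions)      # {True, False}  -- every P satisfies ~(P & ~P)
```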
Logical falsehoods and disjoint events all reduce to contradictions in Boolean algebra. There is no difference in propositional logic. Where are you getting this from?
See this comment
I think you and potato are talking about different things; potato’s criticism is close to what’s discussed in this post.
But for what it’s worth, Bayesians with finite (and flawed) computational powers can meaningfully assign probabilities to mathematical statements, and update as they prove more theorems.
Moreover, we would make little progress in the analysis of primes if we could not use probability theory to restrict expected experience in purely formal environments.