If you don’t want to assume the existence of certain propositions, you’re asking for a probability theory corresponding to a cointuitionistic variant of minimal logic. (Cointuitionistic logic is the logic of affirmatively false propositions, and is sometimes called Popperian logic.) This is a logic with false, disjunction, and conjunction (but not true), plus an operation called co-implication, which I will write a <-- b.
Take your event space L to be a distributive lattice (with ordering <), which does not necessarily have a top element, but does have dual relative pseudo-complements. The co-implication a <-- b is characterized by the adjunction: for all x in L,

b < (a or x) if and only if (a <-- b) < x
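To make the adjunction concrete, here is a minimal sketch (my own illustration, on a toy powerset lattice of my choosing): the least x with b < (a or x) can be found by brute force, and in a powerset it turns out to be the set difference b − a. (A finite lattice necessarily has a top element; the construction above only matters when there isn't one, but the adjunction itself is the same.)

```python
from itertools import chain, combinations

# A small distributive lattice: all subsets of {0, 1, 2}, ordered by inclusion.
U = [0, 1, 2]
L = [frozenset(s)
     for s in chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]

def coimp(a, b):
    """a <-- b: the least x in L with b <= a | x (dual relative pseudo-complement)."""
    candidates = [x for x in L if b <= a | x]
    least = frozenset.intersection(*candidates)
    assert least in candidates  # the meet of all solutions is itself a solution
    return least

a, b = frozenset({0}), frozenset({0, 1})
x = coimp(a, b)

# Check the adjunction: b <= a | y  iff  (a <-- b) <= y, for every y in L.
assert all((b <= a | y) == (x <= y) for y in L)
assert x == b - a  # in a powerset lattice, co-implication is set difference
print(sorted(x))   # [1]
```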
Now, we take a probability function to be a function from elements of L to the reals, satisfying the following axioms:
P(false) = 0
if A < B then P(A) <= P(B)
P(A or B) + P(A and B) = P(A) + P(B)
There you go. Probability theory without certainty.
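As a sanity check, here is a sketch of such a P (my construction, with point weights I made up): assign subnormalized weights to atoms and let P of an event be the sum. All three axioms hold, but no event is forced to have probability 1.

```python
from fractions import Fraction
from itertools import chain, combinations

# Event lattice: subsets of {0, 1, 2} under union/intersection.
U = [0, 1, 2]
L = [frozenset(s)
     for s in chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]

# Assumed point weights; they deliberately sum to 3/5, not 1 -- nothing in the
# three axioms forces any event to have probability 1.
w = {0: Fraction(1, 5), 1: Fraction(3, 10), 2: Fraction(1, 10)}

def P(s):
    return sum(w[i] for i in s)

assert P(frozenset()) == 0                                # P(false) = 0
assert all(P(a) <= P(b) for a in L for b in L if a <= b)  # monotonicity
assert all(P(a | b) + P(a & b) == P(a) + P(b)             # modularity
           for a in L for b in L)
print(max(P(s) for s in L))  # 3/5: even the maximal event is not certain
```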
This is not terribly satisfying, though, since Bayes’s theorem stops working. It fails because conditional probabilities stop working—they arise from a forced normalization that occurs when you try to construct a lattice homomorphism between an event space and a conditionalized event space.
That is, in ordinary probability theory (where L is a Boolean algebra, and P(true) = 1), you can define a conditionalization space L|A as follows:
L|A = { X in L | X < A }
true’ = A
false’ = false
and’ = and
or’ = or
not’(X) = not(X) and A
P’(X) = P(X)/P(A)
with a lattice homomorphism X|A = X and A
Then, the probability of a conditionalized event P’(X|A) = P(X and A)/P(A), which is just what we’re used to. Note that the definition of P’ is forced by the fact that L|A must be a probability space. In the non-certain variant, there’s no unique definition of P’, so conditional probabilities are not well-defined.
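Here is a sketch of the Boolean-algebra case (again a toy powerset, with weights I've chosen so that P(true) = 1), showing that the forced definition P’(X) = P(X and A)/P(A) really does make L|A a probability space with top element A:

```python
from fractions import Fraction
from itertools import chain, combinations

U = [0, 1, 2]
L = [frozenset(s)
     for s in chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]

# A genuine probability this time: the weights sum to 1, so P(true) = 1.
w = {0: Fraction(1, 2), 1: Fraction(1, 3), 2: Fraction(1, 6)}
P = lambda s: sum(w[i] for i in s)

A = frozenset({0, 1})              # the conditioning event
L_A = [x for x in L if x <= A]     # L|A = { X in L : X < A }
cond = lambda x: x & A             # the lattice homomorphism X |-> X and A
P2 = lambda x: P(x & A) / P(A)     # the forced normalization P'

assert P2(frozenset()) == 0        # false' = false, so P'(false) = 0
assert P2(A) == 1                  # true' = A, so P'(A) = 1
assert all(P2(x | y) + P2(x & y) == P2(x) + P2(y) for x in L_A for y in L_A)

X = frozenset({0})
print(P2(cond(X)))                 # P(X and A)/P(A) = (1/2)/(5/6) = 3/5
```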
To regain something like this for cointuitionistic logic, we can switch to tracking degrees of disbelief, rather than degrees of belief. Say that:
D(false) = 1
for all A, D(A) > 0
if A < B then D(A) >= D(B)
D(A or B) + D(A and B) = D(A) + D(B)
This will give you the bounds you need to nail down a conditional disbelief function. I’ll leave that as an exercise for the reader.
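One quick way to see that such D functions exist at all (my own check, not the exercise itself): if P is any subnormalized probability as sketched earlier, with P(A) < 1 for every event A, then D = 1 − P satisfies all four disbelief axioms.

```python
from fractions import Fraction
from itertools import chain, combinations

U = [0, 1, 2]
L = [frozenset(s)
     for s in chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]

# Subnormalized weights again (total 3/5), so P(A) < 1 for every event A.
w = {0: Fraction(1, 5), 1: Fraction(3, 10), 2: Fraction(1, 10)}
P = lambda s: sum(w[i] for i in s)
D = lambda s: 1 - P(s)  # degree of disbelief

assert D(frozenset()) == 1                                # D(false) = 1
assert all(D(a) > 0 for a in L)                           # nothing is certain
assert all(D(a) >= D(b) for a in L for b in L if a <= b)  # antitone in <
assert all(D(a | b) + D(a & b) == D(a) + D(b)             # modularity
           for a in L for b in L)
print(min(D(s) for s in L))  # 2/5: the most believed event still has disbelief 2/5
```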