Given that axiom schema, it seems easy for the agent to prove Prf(S) for all S.
Assumption: U is bounded above by some v that is easy to calculate.
Let C=(0=1), a contradiction.
Then consider [(Prf(C) → U=v) and U<=v] → C. By assumption U<=v is true and easy to prove, so if Prf(C) were false, (Prf(C) → U=v) would hold vacuously, the antecedent would be satisfied, and C would follow. Hence ¬Prf(C) → C. Taking the contrapositive: ¬C → Prf(C). Since ¬C is a tautology, this implies Prf(C).
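Here’s a quick Lean 4 sketch of that step, purely as a sanity check: Prf_C is an opaque proposition standing in for the arithmetized Prf(C), ax is the axiom instance, and all the names are mine.

```lean
-- Sketch of the step above: from the axiom instance for C, a provable
-- bound U ≤ v, and ¬C, classical reasoning forces Prf(C).
example (U v : Nat) (Prf_C C : Prop)
    (ax : ((Prf_C → U = v) ∧ U ≤ v) → C)  -- the moral axiom instance for C
    (hUv : U ≤ v)                          -- the easy bound
    (hC : ¬C) :                            -- ¬C is provable, since C = (0=1)
    Prf_C :=
  Classical.byContradiction fun hnPrf =>
    -- if Prf(C) were false, Prf(C) → U = v would hold vacuously,
    -- so the axiom would yield C, contradicting ¬C
    hC (ax ⟨fun hp => absurd hp hnPrf, hUv⟩)
```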
A short search will also produce Prf(¬C). Then for any S, since (¬C and C) → S, the system can show Prf(S) (I’m assuming it’s expressive enough that from Prf(A) and Prf(A → B) it can get Prf(B)).
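The propagation step can be sketched the same way, with the assumed closure under modus ponens made explicit (Prf here is just an opaque operator on propositions, and the tautology is written in curried form):

```lean
-- Sketch of the propagation: closure of Prf under modus ponens takes
-- Prf(C), Prf(¬C), and Prf of the tautology C → ¬C → S to Prf(S).
example (Prf : Prop → Prop)
    (mp : ∀ {A B : Prop}, Prf (A → B) → Prf A → Prf B)
    (C S : Prop)
    (hPC : Prf C) (hPnC : Prf (¬C)) (hTaut : Prf (C → ¬C → S)) :
    Prf S :=
  mp (mp hTaut hPC) hPnC
```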
Don’t know if this blows up the system yet, but the fact that the system can prove Prf(S) for every S hints that something weird may be going on...
EDIT: Actually, here is how you blow up the system. Since it can demonstrate that Prf(S) is true, the axiom [(Prf(S) → U=u) and U<=u] → S reduces to U=u → S (given U=u, both conjuncts of the antecedent follow). So as long as you can show that U takes one of finitely many values, you can prove any S (and if the system is omega-consistent, it’s already blown up).
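Sketching the reduction in the same illustrative style (note that this direction doesn’t even use Prf(S): U=u already forces both conjuncts of the antecedent):

```lean
-- Sketch of the reduction: the axiom instance for S collapses to U = u → S.
example (U u : Nat) (Prf_S S : Prop)
    (ax : ((Prf_S → U = u) ∧ U ≤ u) → S) :
    U = u → S :=
  fun hU => ax ⟨fun _ => hU, Nat.le_of_eq hU⟩
```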
You’re right, this shows that the moral axioms as stated don’t work. Essentially [(Prf(C) → U=v) and U<=v] → C simplifies to (Prf(C) → U=v) → C, and if C is an absurdity, then ¬(Prf(C) → U=v), that is (¬(U=v) and Prf(C)). Neither Prf(C) nor ¬(U=v) should be provable. Thus, moral axioms in the present form shouldn’t be added for any easily-provably-false statements. Will try to figure out if the damage can be contained.
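Unpacking that classically in the same sketch style: the axiom instance for an absurd C (one whose conclusion is False) hands the system both Prf(C) and U ≠ v.

```lean
-- Sketch: with U ≤ v given, the axiom instance whose conclusion is False
-- yields both Prf(C) and U ≠ v.
example (U v : Nat) (Prf_C : Prop)
    (hUv : U ≤ v)
    (ax : ((Prf_C → U = v) ∧ U ≤ v) → False) :
    Prf_C ∧ U ≠ v :=
  have hPrf : Prf_C := Classical.byContradiction fun hn =>
    ax ⟨fun hp => absurd hp hn, hUv⟩
  ⟨hPrf, fun hU => ax ⟨fun _ => hU, hUv⟩⟩
```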
(Updated the post and its summary to mention the problem.)
One immediate idea is to replace the conditional [(Prf(S) → U=u) and U<=u] → S with the rule of inference “from [(Prf(S) → U=u) and U<=u], deduce S”. That way you can’t take a contrapositive, and you’d probably need to get Löbian to hope to find a contradiction. Not confident at all that would work, though.
Yes, that was the intention, and the problem is that the implication can be tugged from the wrong side, but implication can’t be one-sided. I’d prefer to stay with standard inference rules though, if at all possible.
Pulling on one side but not the other seems like a textbook case of what relevance logics were designed for.
Would restricting the axiom schema to content-less proposition symbols like “B” solve the problem?