OK, so the trouble with logical induction is that it assumes mathematical realism, where “the claim that the 87,653rd digit of π is a 7” is either true or false even when not yet evaluated by anyone, and the paper is discussing a way to assign a reasonable probability to it (e.g. 1⁄10 in this case, if you know nothing about the digits of π a priori) using the trading-market model. In that case the implication condition never holds (since the chance of making an error in calculating the 87,653rd digit of π is always larger than in calculating 1+1). So they are treating logical uncertainty as environmental, then. It makes sense if so.
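(To make the realist point concrete: that digit is perfectly computable today, whether or not anyone has bothered to look. Here's a minimal sketch, assuming the mpmath library and the convention of counting digits after the decimal point; the precision padding and indexing are my own choices, not anything from the paper.)

```python
# Sketch: directly evaluate "the 87,653rd digit of pi is a 7".
# Assumes mpmath is installed; counts digits after the decimal point, 1-indexed.
from mpmath import mp

N = 87_653
mp.dps = N + 50                       # working precision, with some padding
pi_str = mp.nstr(mp.pi, N + 10)       # "3.14159..." as a string
after_point = pi_str.split(".")[1]

digit = after_point[N - 1]
print(f"Decimal digit #{N} of pi is {digit}; the claim 'it is a 7' is {digit == '7'}")
```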
To elaborate, A->B is an operation with a truth table:
A B A->B
T T T
T F F
F T T
F F T
The only thing that falsifies A->B is A being true while B is false. This is different from how we usually think about implication, because there's no requirement that you can deduce B from A. It's just a truth table.
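As a sanity check (a minimal sketch; the function name `implies` is mine, not from anywhere in particular), the material conditional is literally just `(not A) or B`, and enumerating both inputs reproduces the table above:

```python
# Material conditional as a pure truth-table operation: A -> B  ==  (not A) or B.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

print("A      B      A->B")
for a in (True, False):
    for b in (True, False):
        print(f"{a!s:<6} {b!s:<6} {implies(a, b)}")
```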
But it is relevant to probability, because if you are certain of A->B, then you're not allowed to assign high probability to A but low probability to B: P(A->B) = 1 forces P(B) ≥ P(A).
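Spelled out (a short derivation using the standard identity A→B ≡ ¬A ∨ B; this is just ordinary probability, not anything specific to the paper):

```latex
P(A \to B) = 1
\;\Longrightarrow\; P(\neg A \lor B) = 1
\;\Longrightarrow\; P(A \land \neg B) = 0
\;\Longrightarrow\; P(A) = P(A \land B) + P(A \land \neg B) = P(A \land B) \le P(B).
```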
EDIT: Anyhow, I think that paragraph is a really quick and dirty way of phrasing the incompatibility of logical uncertainty with normal probability. The issue is that in normal probability, logical steps are things that are allowed to happen inside the parentheses of the P() function. No matter how complicated the proof of φ, as long as φ follows logically from the premises, you can't doubt φ more than you doubt the premises, because the P() function treats P(premises) and P(anything Boolean-equivalent to the premises) as “the same thing.”
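(A toy illustration of that last point, my own sketch rather than anything from the paper: in a classical probability space, any two formulas that pick out the same set of worlds get exactly the same probability, no matter how much Boolean manipulation it takes to see that they are equivalent.)

```python
import itertools, random

# Worlds are truth assignments to atoms p, q, r, with arbitrary normalized weights.
worlds = list(itertools.product([True, False], repeat=3))
weights = [random.random() for _ in worlds]
total = sum(weights)
prob = {w: wt / total for w, wt in zip(worlds, weights)}

def P(formula):
    """Probability of a formula = total weight of the worlds where it holds."""
    return sum(pr for (p, q, r), pr in prob.items() if formula(p, q, r))

premise = lambda p, q, r: p and (p != q)          # "p and (p xor q)"
# A Boolean-algebra equivalent of the premise, phrased less obviously:
equivalent = lambda p, q, r: not ((not p) or q)   # "not (p -> q)", i.e. p and not q

print(P(premise), P(equivalent))   # identical: P() cannot tell them apart,
                                   # regardless of how long the equivalence proof is
```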
A⇒B is true iff either (i) A is false or (ii) B is true. Therefore, if Y is some true sentence, then X⇒Y holds for any X. Here, Y is Φ.