I guess you meant to say “mathematical statements without quantifiers”?
If you want quantifiers, you can just program your robot to respond to the symbol “for all” so that when it sees “for all x, x=y” it writes all the implications in the notebook, and when x=y for all x, it writes “for all x, x=y”. This is an infinite amount of writing to do, but there was always an infinite amount of writing to do—the robot is infinitely fast, and anyway is just a metaphor for the rules of our language.
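(For the natural-number case, the “write all the instances” move is easy to mimic in code. A toy sketch, with hypothetical names, assuming x ranges over the naturals and phi is some decidable predicate—the robot enumerates instances forever, and we merely peek at a prefix:)

```python
from itertools import count, islice

def instances(phi):
    """For a universal claim 'for all n, phi(n)' over the naturals,
    write out its (infinitely many) instances one by one."""
    for n in count():
        yield f"phi({n}) = {phi(n)}"

# The robot is infinitely fast; we only look at the first few entries.
notebook = list(islice(instances(lambda n: n + 0 == n), 3))
# notebook == ['phi(0) = True', 'phi(1) = True', 'phi(2) = True']
```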
Sorry, I should’ve said “statements that are provable or disprovable from the axioms”, mentioning quantifiers was kinda irrelevant. Are you saying that your robot will eventually write out truth values for statements that are independent of the axioms as well? (Like the continuum hypothesis in ZFC.)
I feel like the robot metaphor may be outside of its domain of validity by now. Anyhow, I replied over in the other branch.
So if you give your robot the axioms of ZFC, it will eventually tell you if the continuum hypothesis is true or false?
Are you assuming that x can only range over the natural numbers? If x can range over reals or sets, or some arbitrary kind of objects described by the axioms, then it’s harder to describe what the robot should do. The first problem is that an individual x may have no finite description. The second, more serious problem is that translating statements with quantifiers into statements of infinite length would require the robot to use some “true” model of the axioms, but often there are infinitely many models by Löwenheim–Skolem and no obvious way of picking out a true one.
Also, my original comment was slightly misleading—the “one true distribution” would in fact cover many statements with quantifiers, and miss many statements without quantifiers. The correct distinction is between statements that are provable or disprovable from the axioms, and statements that are independent of the axioms. If the axioms are talking about natural numbers, then all statements without quantifiers should be covered by the “one true distribution”, but in general that doesn’t have to be true.
Well, it’s certainly a good point that there are lots of mathematical issues I’m ignoring. But for the topics in this sequence, I am interested not in those issues themselves, but in how they are different between classical logic and probabilistic logic.
This isn’t trivial, since statements that are classically undetermined by the axioms can still have arbitrary probabilities (Hm, should that be its own post, do you think? I’ll have to mention it in passing when discussing the correspondence between inconsistency and limited information). But in this post, the question is whether there is no difference for statements that are provable or disprovable from the axioms. I’m claiming there’s no difference. Do you think that’s right?
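(Not a full probabilistic logic, but here’s a toy propositional version of the point, with made-up atoms: put a uniform distribution over the models of an axiom set. Statements the axioms prove get probability 1, statements they refute get 0, and statements independent of the axioms land strictly in between—and by reweighting the models you could make that in-between value almost anything.)

```python
from itertools import product

atoms = ["x", "y"]
axiom = lambda m: m["x"] or m["y"]   # the lone axiom: x OR y

# All truth assignments consistent with the axiom.
models = [dict(zip(atoms, bits))
          for bits in product([True, False], repeat=len(atoms))
          if axiom(dict(zip(atoms, bits)))]

def prob(stmt):
    """Probability of a statement under the uniform distribution on models."""
    return sum(1 for m in models if stmt(m)) / len(models)

p_provable    = prob(lambda m: m["x"] or m["y"])   # provable from the axiom: 1.0
p_independent = prob(lambda m: m["x"])             # independent of the axiom: 2/3
```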
Yeah, I agree with the point that classical logic would instantly settle all digits of pi, so it can’t be the basis of a theory that would let us bet on digits of pi. But that’s probably not the only reason why we want a theory of logical uncertainty. The value of a digit of pi is always provable (because it’s a quantifier-free statement), but our math intuition also allows us to bet on things like Con(PA), which is independent, or P!=NP, for which we don’t know if it’s independent. You may or may not want a theory of logical uncertainty that can cover all three cases uniformly.
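(To make the “always provable” point concrete: any particular digit of pi is settled by a finite computation, so there’s no independence worry there. A sketch using Gibbons’ unbounded spigot algorithm, a standard snippet that streams decimal digits of pi using exact integer arithmetic:)

```python
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot algorithm: yields decimal digits of pi."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = (10 * q,
                       10 * (r - n * t),
                       (10 * (3 * q + r)) // t - 10 * n)
        else:
            q, r, t, k, n, l = (q * k,
                                (2 * q + r) * l,
                                t * l,
                                k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l),
                                l + 2)

first = list(islice(pi_digits(), 3))   # [3, 1, 4]
```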