Is there a reason to think this would be different from any other kind of induction or Bayesian reasoning? We use probabilities all the time to describe things that have a true answer we happen not to know. Probability is often (arguably always) subjective in that way. For example, what is the probability that you, Eigil Rischel, have any siblings? In an objective sense, the answer is either 0 or 1. From your subjective perspective, it is either very close to 0 or very close to 1. But from my perspective, knowing nothing about you, I'm going to put it at 0.7. If I wanted a better estimate, I could look up what fraction of people have siblings and use that; if I wanted an even better one, I could just ask you. But right now, from my perspective, the probability that you have siblings is 0.7. This seems straightforward for physical truths, and I don't see any real difference for mathematical truths. You should be able to use all the standard rules of probability theory, Bayes' theorem, and so on.
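To make the kind of update I have in mind concrete, here is a minimal sketch. Every number in it is an illustrative assumption, not a real statistic: the 0.7 prior from above, plus made-up likelihoods for some hypothetical piece of evidence.

```python
# Hypothetical Bayes update on the "siblings" question.
# All numbers are illustrative assumptions, not real statistics.

prior = 0.7               # my initial credence that you have siblings

# Suppose I learn some noisy evidence E, e.g. you mention a family reunion.
p_e_given_siblings = 0.3  # assumed likelihood of E if you have siblings
p_e_given_none = 0.1      # assumed likelihood of E if you don't

# Bayes' theorem: P(siblings | E) = P(E | siblings) * P(siblings) / P(E)
p_e = p_e_given_siblings * prior + p_e_given_none * (1 - prior)
posterior = p_e_given_siblings * prior / p_e

print(f"posterior = {posterior:.3f}")  # ~0.875
```

Nothing about the evidence being mathematical rather than physical changes this arithmetic, which is the point I'm making above.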
I'm unsure that your second bullet point follows. For that limit to be 1, I should be able to pick a (finite) N such that if psi(n) holds for all 0 <= n <= N, then the probability of "for all n, psi(n)" is at least 0.9. I don't know how to find such an N. How do I know the limit isn't 0.8? Intuitively, I feel that just checking more and more values of n should not get us arbitrarily close to certainty, but I don't know how to justify that intuition rigorously (one way it might be made precise is sketched below). Infinities are weird. Possibly infinities give us different rules for certain mathematical truths; I don't know. I would be curious to hear other people's thoughts.
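Here is one toy way the limit could stall below 1. This is my own hedged construction, not anything from the linked paper: suppose the prior puts mass 0.8 on "psi(n) holds for all n", mass 0.1 spread over hypotheses "psi first fails at k" for finite k, and mass 0.1 on a hypothesis under which every individual instance psi(0), psi(1), ... is true and yet the universal claim is counted false (think of a nonstandard model, where "for all n" ranges over more than the numerals). Observing psi(0), ..., psi(N) rules out the finite-failure hypotheses one by one but never separates the first hypothesis from the last, so the posterior climbs toward 0.8 / 0.9 ~ 0.889 and stops.

```python
# Toy prior showing how checking instances psi(0..N) can fail to drive
# the probability of "for all n, psi(n)" to 1. All numbers are
# illustrative assumptions.

P_ALL = 0.8      # prior: psi holds for every n (universal claim true)
P_NONSTD = 0.1   # prior: every instance psi(k) holds, yet the universal
                 # claim is false (fails at a "nonstandard" element)
P_FAIL = 0.1     # prior: psi first fails at some finite k, spread
                 # geometrically: P(first failure at k) = P_FAIL * 2**-(k+1)

def posterior_universal(N: int) -> float:
    """P(universal claim | psi(0), ..., psi(N) all observed true)."""
    # Surviving finite-failure mass: failures at k > N remain possible.
    # (The infinite tail is truncated at N + 200; the remainder is negligible.)
    surviving_fail = P_FAIL * sum(2 ** -(k + 1) for k in range(N + 1, N + 200))
    evidence = P_ALL + P_NONSTD + surviving_fail
    return P_ALL / evidence

for N in (0, 5, 50):
    print(N, round(posterior_universal(N), 4))
# 0 0.8421, 5 0.8873, 50 0.8889: the posterior approaches 0.8 / 0.9 and
# never exceeds it, because the observations never distinguish the
# universal claim from the "true instance-by-instance" hypothesis.
```

Whether a logical inductor's prices actually behave like this toy Bayesian prior is, as far as I can tell, exactly the question being asked.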
Eigil is asking a specific (purely mathematical!) question about "logical induction", which is defined in the paper they linked to. Your comment seems to miss the question.