> Without induction, there is no Bayes’s theorem, because the very concept of evidence presupposes induction.
I strongly disagree. Bayes’s theorem is a theorem of mathematics. It does not presuppose induction. See, for example, Jaynes, where Bayes’s theorem is established in the first couple of chapters and then used throughout the book. Induction, on the other hand, is something Jaynes is a little puzzled by. He thinks the justification of induction is related to the justification of MAXENT priors, and that both can be rationally justified, but neither is iron-clad the way Bayes’s theorem is.
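(For reference, the theorem itself is nothing more than an identity that follows from the product rule of probability:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

No inductive assumption appears anywhere in it; whether an update looks inductive depends entirely on the prior $P(H)$.)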
As Unknown’s post points out, given some priors, your updating in response to evidence is induction-like, whereas given other priors, your updating may appear contrary to induction. But Bayes’s theorem is applicable in both cases.
So how do we characterize this difference in priors? One thing we can say is that naive induction works (to some extent) whenever our prior regarding a population is such that a sample from the population provides information about the rest of the population.
When sampling without replacement from an urn which we know a priori contains 5 white and 5 red balls, we are in an anti-inductive situation. Sampling tells us nothing about the population—we already know (a priori) everything there is to know about it. So if we draw a white ball, Bayes’s theorem tells us to reduce the probability that the next ball will be white, from 5/10 to 4/9.
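A toy calculation (mine, not part of the original urn discussion) makes the anti-inductive direction explicit:

```python
from fractions import Fraction

# An urn known a priori to hold 5 white and 5 red balls,
# sampled without replacement.
white, red = 5, 5

p_first_white = Fraction(white, white + red)  # 5/10 = 1/2
# Given that the first draw was white, one fewer white ball remains.
p_next_white = Fraction(white - 1, white + red - 1)  # 4/9

print(p_first_white, p_next_white)  # 1/2 4/9: the probability went DOWN
```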
But when our prior regarding the urn’s contents is less well informed, sampling does tell us something. And when the prior is not informed at all—when it is MAXENT—induction works exactly as expected. Each white ball drawn increases the probability that the next draw will be white.
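Here is a sketch of that case (my example and notation, not Jaynes’s): a uniform (MAXENT) prior over the unknown number of white balls K in an urn of N, updated by Bayes’s theorem after n draws without replacement.

```python
from fractions import Fraction
from math import comb

def p_next_white(N: int, n: int, w: int) -> Fraction:
    """P(draw n+1 is white | w whites seen in n draws), uniform prior on K."""
    # Likelihood of seeing w whites in n draws from an urn with K whites;
    # the uniform prior over K cancels out of the posterior.
    posterior = [
        Fraction(comb(K, w) * comb(N - K, n - w), comb(N, n))
        if w <= K and n - w <= N - K else Fraction(0)
        for K in range(N + 1)
    ]
    # Predictive probability that the next draw is white, averaged over K.
    return sum(p * Fraction(K - w, N - n)
               for K, p in enumerate(posterior)) / sum(posterior)

print(p_next_white(10, 0, 0))  # 1/2 before any evidence
print(p_next_white(10, 1, 1))  # 2/3 after one white: probability went UP
print(p_next_white(10, 2, 2))  # 3/4 after two whites
```

With a uniform prior this predictive probability works out to Laplace’s rule of succession, (w+1)/(n+2): exactly the induction-like behavior described above.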
So, according to Jaynes, the justification of induction is equivalent to the justification of using MAXENT priors in cases of no information. Not quite as well-founded in reason as Bayes’s theorem, but still pretty reasonable.
It occurs to me that not only is Bayes’s theorem more obviously correct than induction, it is also more general than induction.
Bayes’s theorem applies to every case of updating beliefs upon receipt of evidence.
Induction is limited to a subset of those cases—specifically, cases in which we wish to update our beliefs about a population using evidence that consists of a sample drawn from that population.
Edit to reply to your edit: Yes, I think it is true that for many problems Bayes’s theorem isn’t useful, and that for all problems where induction works, it is the fact that induction works that makes Bayes’s theorem useful. These are all cases of updating based on a sample from a population. But there are also clearly problems where Bayesian reasoning is useful but induction simply doesn’t apply: problems where there is no population and no sample, but you do have an informative prior. Problems like Jaynes’s burglar alarm.
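A minimal sketch of that kind of no-sample update, with made-up numbers (not Jaynes’s actual figures):

```python
# One Bayes's-theorem update with an informative prior and no
# population being sampled. All numbers are invented for illustration.
p_burglary = 0.001                # informative prior: burglaries are rare
p_alarm_given_burglary = 0.95     # the alarm usually triggers on a burglary
p_alarm_given_no_burglary = 0.01  # occasional false alarms

p_alarm = (p_alarm_given_burglary * p_burglary
           + p_alarm_given_no_burglary * (1 - p_burglary))

# P(burglary | alarm) = P(alarm | burglary) * P(burglary) / P(alarm)
p_burglary_given_alarm = p_alarm_given_burglary * p_burglary / p_alarm
print(p_burglary_given_alarm)  # ~0.087: belief shifts with no sampling at all
```

No population, no sample, yet the prior and the likelihoods do all the work.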