To a Bayesian, the problem of induction comes down to justifying your priors. If your priors rate an orderly universe as no more likely than a disorderly one, then all the evidence of regularity in the past is no reason to expect regularity in the future: all futures remain equally likely. Only with a prior that assigns higher probability to more orderly universes, as Solomonoff’s universal prior does, can you use the past to make predictions.
More than that, surely: inductive inference is also built into Bayes’ theorem itself.
Unless the past is useful as a guide to the future, the whole concept of maintaining a model of the world and updating it when new evidence arrives becomes worthless.
inductive inference is also built into Bayes’ theorem itself
As you say, Bayes’ theorem isn’t useful if you start from a “flat” prior: all posterior probabilities come out the same as the prior probabilities, at least when A is in the future and B in the past. But nothing in Bayes’ theorem itself says that it has to be useful.
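This is easy to check in miniature. Here is a toy sketch (my own construction, not Solomonoff’s actual prior): take all 8-bit sequences as hypotheses, condition on a perfectly regular 4-bit past, and compare the predictive probability of the next bit under a flat prior versus a prior that penalizes complexity, crudely measured here as the number of constant runs in the sequence.

```python
from itertools import product
from fractions import Fraction

PAST = (1, 1, 1, 1)   # four regular past observations
N = 8                 # hypotheses: full 8-bit sequences (4 past + 4 future)

def runs(seq):
    """Number of maximal constant runs -- a crude stand-in for complexity."""
    return 1 + sum(a != b for a, b in zip(seq, seq[1:]))

def predictive_next_one(weight):
    """P(next bit = 1 | PAST) under a prior proportional to weight(seq)."""
    num = den = Fraction(0)
    for seq in product((0, 1), repeat=N):
        if seq[:4] != PAST:
            continue              # condition on the observed past
        w = weight(seq)
        den += w
        if seq[4] == 1:
            num += w
    return num / den

# Flat prior: every sequence equally likely a priori.
flat = predictive_next_one(lambda s: Fraction(1))

# "Orderly" prior: weight 2^(-runs), so simpler sequences count for more.
orderly = predictive_next_one(lambda s: Fraction(1, 2 ** runs(s)))

# flat comes out exactly 1/2: the regular past tells you nothing.
# orderly comes out above 1/2: the simplicity-weighted prior expects
# the pattern to continue.
```

Under the flat prior the posterior predictive is exactly 1/2 no matter how regular the past was, which is the point above: updating happens, but it is useless for prediction. Only the simplicity-weighted prior lets the observed regularity project into the future.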