You only need faith in two things
You only need faith in two things: That “induction works” has a non-super-exponentially-tiny prior probability, and that some single large ordinal is well-ordered. Anything else worth believing in is a deductive consequence of one or both.
(Because being exposed to ordered sensory data will rapidly promote the hypothesis that induction works, even if you started by assigning it very tiny prior probability, so long as that prior probability is not super-exponentially tiny. Then induction on sensory data gives you all empirical facts worth believing in. Believing that a mathematical system has a model usually corresponds to believing that a certain computable ordinal is well-ordered (the proof-theoretic ordinal of that system), and the well-orderedness of a larger ordinal implies the well-orderedness of all smaller ordinals. So if you assign non-super-exponentially-tiny prior probability to the idea that induction might work, and you believe in the well-orderedness of a single sufficiently large computable ordinal, all of empirical science, and all of the math you will actually believe in, will follow without any further need for faith.)
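As a concrete illustration of how fast that promotion happens, here is a toy calculation of my own; the evidence-per-observation figure is an assumption, not anything argued above. If each ordered observation is about twice as likely given that induction works, the log-odds for "induction works" rise by roughly one bit per observation, so even a prior of 2^-100 reaches even odds after about a hundred observations and near-certainty shortly after.

```python
# Toy sketch with assumed numbers: each ordered observation contributes about
# one bit of evidence for "induction works", so the log2 odds rise linearly
# and even a 2^-100 prior is promoted to near-certainty fairly quickly.

def posterior_probability(prior_log2_odds, n_observations, bits_per_obs=1.0):
    """Posterior P(induction works) after n observations, in this toy evidence model."""
    log2_odds = prior_log2_odds + n_observations * bits_per_obs
    return 1.0 / (1.0 + 2.0 ** (-log2_odds))

for n in (0, 50, 100, 150, 200):
    p = posterior_probability(prior_log2_odds=-100.0, n_observations=n)
    print(f"after {n:3d} ordered observations: P(induction works) ~ {p:.3g}")
```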
(The reason why you need faith for the first case is that although the fact that induction works can be readily observed, there is also some anti-inductive prior which says, ‘Well, but since induction has worked all those previous times, it’ll probably fail next time!’ and ‘Anti-induction is bound to work next time, since it’s never worked before!’ Since anti-induction objectively gets a far lower Bayes-score on any ordered sequence and is then demoted by the logical operation of Bayesian updating, to favor induction over anti-induction it is not necessary to start out believing that induction works better than anti-induction; it is only necessary *not* to start out by being *perfectly* confident that induction won’t work.)
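Here is a minimal sketch of that comparison, with toy predictors of my own choosing: "induction" is Laplace's rule of succession, and "anti-induction" is the same rule with the counts swapped, so it bets against whatever has happened most often so far. On an ordered sequence the anti-inductive rule's Bayes-score collapses, and updating then favors induction even from a starting point heavily weighted against it.

```python
import math

def log2_score(predict, data):
    """Total log2 probability a bit-predictor assigns to a 0/1 sequence."""
    total, ones, seen = 0.0, 0, 0
    for bit in data:
        p_one = predict(ones, seen)
        total += math.log2(p_one if bit == 1 else 1.0 - p_one)
        ones += bit
        seen += 1
    return total

def inductive(ones, seen):         # Laplace's rule of succession
    return (ones + 1) / (seen + 2)

def anti_inductive(ones, seen):    # the same rule, betting the other way
    return (seen - ones + 1) / (seen + 2)

data = [1] * 50                    # a maximally ordered sequence
s_ind = log2_score(inductive, data)
s_anti = log2_score(anti_inductive, data)

# Start out *almost* perfectly confident that induction won't work:
prior_log2_odds = -30.0            # roughly a billion to one against induction
posterior_log2_odds = prior_log2_odds + (s_ind - s_anti)
print(f"Bayes-scores (log2): induction {s_ind:.1f}, anti-induction {s_anti:.1f}")
print(f"posterior log2 odds favoring induction: {posterior_log2_odds:.1f}")
```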
(The reason why you need faith for the second case is that although more powerful proof systems (those with larger proof-theoretic ordinals) can prove the consistency of weaker proof systems, or equivalently prove the well-ordering of smaller ordinals, there’s no known perfect system for telling which mathematical systems are consistent, just as (equivalently!) there’s no way of solving the halting problem. So when you reach the strongest math system you can be convinced of, and further assumptions seem dangerously fragile, there’s some large ordinal that represents all the math you believe in. If this doesn’t seem to you like faith, try looking up a Buchholz hydra and then believing that it can always be killed.)
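For a feel of what believing "it can always be killed" involves, here is a sketch of the much simpler Kirby-Paris hydra game, a toy cousin of the Buchholz hydra (whose regrowth rules are considerably more elaborate); the list encoding and the leftmost-head strategy are my own choices. Even for this simpler game, the theorem that every strategy kills every hydra in finitely many steps is provable by induction up to the ordinal ε₀, but not in Peano arithmetic.

```python
import copy

# A hydra is a rooted tree written as nested lists: a node is the list of
# its child subtrees, and a head is a leaf (an empty list).

def chop(hydra, path, n):
    """Chop the head at `path` (child indices from the root) on turn n."""
    nodes = [hydra]
    for i in path:                        # walk root -> ... -> grandparent -> parent -> head
        nodes.append(nodes[-1][i])
    assert nodes[-1] == [], "only a leaf (a head) can be chopped"
    parent = nodes[-2]
    parent.pop(path[-1])                  # remove the head
    if len(path) >= 2:                    # head was at depth >= 2: the hydra regrows,
        grandparent = nodes[-3]           # sprouting n copies of the reduced parent
        for _ in range(n):                # subtree from the grandparent
            grandparent.append(copy.deepcopy(parent))

def some_head(hydra, path=()):
    """Path to the leftmost head, or None if the hydra is dead."""
    if not hydra:
        return None
    child = hydra[0]
    return list(path) + [0] if not child else some_head(child, path + (0,))

hydra = [[[[]]], [[]]]                    # a small starting hydra
turn = 0
while hydra:                              # dead once the root has no children left
    turn += 1
    chop(hydra, some_head(hydra), turn)
print(f"hydra slain after {turn} chops")  # finite; the theorem says every strategy is
```

The Buchholz hydra plays the same kind of game with labeled heads and a much stronger regrowth rule, and believing that *it* can always be killed corresponds to the well-orderedness of a far larger ordinal.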
(Work is ongoing on eliminating the requirement for faith in these two remaining propositions. For example, we might be able to describe our increasing confidence in ZFC in terms of logical uncertainty and an inductive prior which is updated as ZFC passes various tests that it would have a substantial subjective probability of failing, even given all other tests it has passed so far, if ZFC were inconsistent.)
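As a toy version of that proposal, with made-up numbers and "test" standing for something like an exhaustive search for contradictions among all proofs up to some length, the update after each passed test looks like the sketch below. The key assumed parameter is the subjective probability that the test would have failed if ZFC were inconsistent, given everything ZFC has already survived.

```python
def update_on_passed_test(p_consistent, p_fail_if_inconsistent):
    """Posterior P(ZFC consistent) after ZFC passes one more test."""
    # P(pass | consistent) = 1;  P(pass | inconsistent) = 1 - p_fail_if_inconsistent
    pass_and_consistent = p_consistent
    pass_and_inconsistent = (1.0 - p_consistent) * (1.0 - p_fail_if_inconsistent)
    return pass_and_consistent / (pass_and_consistent + pass_and_inconsistent)

p = 0.5                                    # an agnostic starting point (an assumption)
for p_fail in (0.5, 0.3, 0.2, 0.1, 0.05):  # later tests are weaker evidence, since shallow
    p = update_on_passed_test(p, p_fail)   # contradictions would already have surfaced
    print(f"test passed (p_fail={p_fail:.2f} if inconsistent): P(consistent) = {p:.3f}")
```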
(No, this is *not* the “tu quoque!” moral equivalent of starting out by assigning probability 1 that Christ died for your sins.)