Under the paradigm of probability as extended logic, it is wrong to distinguish between empirical and demonstrative reasoning, since classical logic is just the limiting case of Bayesian probability in which every probability is 0 or 1.
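To make the 0/1 limit concrete, here is a toy sketch (the helper function and the numbers are mine, purely for illustration): with sharp probabilities, the law of total probability collapses into modus ponens, while soft probabilities run the same formula as graded, "empirical" inference.

```python
# Toy illustration (not from the original discussion): classical inference as
# the 0/1 limit of probabilistic inference, via the law of total probability.

def prob_b(p_a, p_b_given_a, p_b_given_not_a):
    """P(B) = P(B|A) * P(A) + P(B|~A) * P(~A)."""
    return p_b_given_a * p_a + p_b_given_not_a * (1.0 - p_a)

# Sharp (0/1) probabilities reproduce modus ponens as a truth table:
assert prob_b(p_a=1.0, p_b_given_a=1.0, p_b_given_not_a=0.0) == 1.0  # A, A=>B |- B
assert prob_b(p_a=0.0, p_b_given_a=1.0, p_b_given_not_a=0.0) == 0.0

# Soft probabilities run the same rule as graded ("empirical") reasoning:
print(prob_b(p_a=0.9, p_b_given_a=0.95, p_b_given_not_a=0.2))  # ~0.875
```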
Besides that, category theory was born more than 70 years ago! Sure, very young compared to other disciplines, but not *so* young. Also, the work of Lawvere (the first to connect categories and logic) began in the 1960s, so it dates back at least forty years.
That said, I’m not saying that category theory cannot in principle be used to reason about reasoning (the effective topos is a wonderful piece of machinery); it just cannot say that much right now about Bayesian reasoning.
Interesting. This might be somewhat off topic, but I’m curious how such a Bayesian analysis of mathematical knowledge would explain the fact that it is provable that a randomly selected real number is non-computable with probability 1, yet this is not equivalent to a proof that all real numbers are non-computable. The real numbers 1, 1.4, the square root of 2, pi, etc. are all computable numbers, although the probability of such numbers occurring in an empirical sample of the domain is zero.
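To make the fact I have in mind concrete (a toy sketch with my own numbers, not a claim about how the Bayesian story should go): the computable reals are countable, and a countable set can be covered by intervals of arbitrarily small total length, so it has measure zero without being empty.

```python
# Toy sketch (my own numbers): a countable set, like the computable reals, can
# be covered by intervals whose total length is as small as we like, so it has
# Lebesgue measure zero even though it is far from empty.

def covering_length(eps, n_terms=60):
    """Total length of covering intervals eps/2 + eps/4 + ... (one per point)."""
    return sum(eps / 2 ** k for k in range(1, n_terms + 1))

for eps in (1.0, 0.1, 0.001):
    print(eps, covering_length(eps))  # the total never exceeds eps

# So "non-computable with probability 1" is a statement about measure, not a
# claim that points like 1, 1.4, sqrt(2) or pi fail to exist.
```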
So far, Bayesian probability has been extended to infinite sets only as a limit of continuous transfinite functions. So I’m not quite sure of the official answer to that question.
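If the "limit" point is read in a finite-sets-first spirit (my gloss, not necessarily what was meant), there is a discrete analogue of the puzzle above: on {1, ..., N} the chance that a uniform draw is a perfect square shrinks to zero as N grows, even though perfect squares exist at every N.

```python
import math

# Discrete analogue (my own gloss on the "limit" remark): draw uniformly from
# {1, ..., N}. The probability of hitting a perfect square is floor(sqrt(N))/N,
# which tends to 0 as N grows, even though perfect squares exist at every N.

def p_square(n):
    """P(a uniform draw from {1, ..., n} is a perfect square)."""
    return math.isqrt(n) / n

for n in (10, 1_000, 1_000_000, 10**12):
    print(n, p_square(n))
```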
On the other hand, what I do know is that even common measure theory cannot assign positive probability to a singleton when the support is continuous: an atomless measure on a set of size 2^ℵ0 gives every individual point probability zero, and no choice of sigma-algebra changes that.
And if you’re willing to bite the bullet and build such an algebra by assuming a measurable cardinal, you end up with an ultrafilter that allows you to define infinitesimal quantities.
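To illustrate the singleton point with a toy example (Uniform(0, 1) is my choice, purely for concreteness): the probability of an interval is its length, so the probability of hitting one exact point is the limit of ever-shorter intervals around it, which is zero.

```python
# Toy example with X ~ Uniform(0, 1) (my choice for concreteness): probability
# of an interval is its length, so the probability of one exact point is the
# limit of ever-shorter intervals around it, i.e. zero.

def p_interval(a, b):
    """P(a <= X <= b) for X ~ Uniform(0, 1)."""
    lo, hi = max(a, 0.0), min(b, 1.0)
    return max(hi - lo, 0.0)

x = 0.5
for eps in (0.1, 0.01, 1e-4, 1e-8):
    print(eps, p_interval(x - eps, x + eps))  # 2*eps, shrinking towards 0
```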
I don’t know enough math to understand your response. However, from the bits I can understand, it seems to leave open the epistemic issue of needing an account of demonstrative knowledge that is not dependent on Bayesian probability.