What is logical induction’s take on probabilistic algorithms? That should be the easiest test-case.
Say, before “PRIMES is in P”, we had perfectly fine probabilistic algorithms for checking primality. A good theory of mathematical logic with uncertainty should permit us to use such an algorithm, without a random oracle, for the things you file under “logical uncertainty”. As far as I understand, the typical mathematician’s take is to just ignore this foundational issue and do what’s right (channeling Thurston: mathematicians are in the business of producing human understanding, not formal proofs).
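For concreteness, here is a minimal sketch of the kind of test meant here. I am picking Miller–Rabin as the example (the comment only says “probabilistic algorithms”); the only randomness it uses is the choice of witness bases, which is exactly the part a “random oracle” would be supplying.

```python
import random

def miller_rabin(n, rounds=20, rng=random):
    """Probabilistic primality test: returns False if n is composite,
    True if n is probably prime (error probability <= 4**-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)  # random base: the only random branch
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # base a witnesses that n is composite
    return True
```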
Logical induction does not take the outputs of randomized algorithms into account. But it does listen to deterministic algorithms obtained by taking a randomized algorithm and making it branch pseudo-randomly instead of randomly. Because of this, I expect that modifying logical induction to include randomized algorithms would not lead to a significant gain in performance.
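To illustrate the kind of pseudo-random substitution meant here, one could derive the “random” bases deterministically from the input, e.g. by seeding a PRNG with a hash of n. The seeding scheme below is my own assumption for illustration, not anything prescribed by the logical induction paper; it reuses the `miller_rabin` sketch above.

```python
import hashlib
import random

def derandomized_is_probable_prime(n, rounds=20):
    """Same test, but the 'random' branches are driven by a PRNG seeded
    deterministically from the input, so the whole procedure is a plain
    deterministic algorithm that a logical inductor can listen to."""
    seed = hashlib.sha256(str(n).encode()).digest()
    rng = random.Random(seed)
    return miller_rabin(n, rounds=rounds, rng=rng)
```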