Yeah, it makes sense to study perfect theories separately from approximations. That applies to probability theory too.
Imagine there’s a prophet who predicts tomorrow’s weather and is always right. Probability theory says you should have a prior over the prophet’s possible mechanisms before you decide to trust him. But your mind is too small for that, and after seeing the prophet succeed for ten years in a row you’ll start trusting him anyway. That trust comes from something other than probabilistic reasoning; let’s call it “approximate reasoning”.
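For concreteness, here’s a minimal sketch (toy numbers I made up, not anything from the argument above) of what the idealized Bayesian update looks like once you’ve already compressed “possible mechanisms” down to just two hypotheses: a reliable prophet versus a 50/50 guesser. The intractable part, of course, is that a real prior would have to range over every possible mechanism, not two.

```python
import math

# Toy Bayesian update on the prophet's track record, done in log-odds
# so the astronomically small likelihoods don't underflow.
# All numbers here are illustrative assumptions.

prior_log_odds = math.log(1e-9 / (1 - 1e-9))  # start heavily against the prophet
n_days = 10 * 365                              # ten years of daily correct predictions

# Each correct prediction multiplies the odds by
# P(hit | reliable prophet) / P(hit | coin-flip guesser) = 1.0 / 0.5.
log_likelihood_ratio = n_days * math.log(1.0 / 0.5)

posterior_log_odds = prior_log_odds + log_likelihood_ratio
posterior = 1 / (1 + math.exp(-posterior_log_odds))
print(posterior)  # ~1.0: the streak overwhelms even an extreme prior
```

Even with a prior of one in a billion, the unbroken streak swamps it. The hard part isn’t this arithmetic; it’s that a bounded mind can’t actually enumerate the hypothesis space the arithmetic presupposes.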
Why is that relevant? Well, I think logical induction is in the same boat. Solving logical uncertainty doesn’t mean finding an analogue of perfect probability theory that would work in logic: we already have that, and it’s called logic! Instead, solving logical uncertainty means finding an analogue of approximate reasoning that works in logic. It feels like a messy problem for the same reason that approximate reasoning with ordinary probabilities is messy. So if we ever find the right theory of reasoning for minds too small to have a prior over everything, it might well apply to both ordinary and logical uncertainty, and sit on top of probability theory rather than looking like probability theory.