If the listener is running a computable logical uncertainty algorithm, then for a difficult proposition it hasn’t made much sense of, the listener might say “70% likely it’s a theorem and X will say it, 20% likely it’s not a theorem and X won’t say it, 5% PA is inconsistent and X will say both, 5% X isn’t naming all and only theorems of PA”.
Conditioned on PA being consistent and on X naming all and only theorems of PA, and on the listener’s logical uncertainty being well-calibrated, you’d expect that in 78% of such cases X eventually names it (conditioning drops the two 5% branches, renormalizing the 70% branch to 0.70/0.90 ≈ 78%).
But you can’t use the listener’s current probabilities on [X saying it] to sort out theorems from non-theorems in a way that breaks computability!
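For concreteness, here is a minimal sketch of the conditioning arithmetic behind the 78% figure, using the hypothetical four-branch credences from the example above:

```python
# The listener's four-way credence over (hypothesis, observation) branches,
# taken from the example above.
branches = {
    ("theorem", "X says it"): 0.70,
    ("not a theorem", "X never says it"): 0.20,
    ("PA inconsistent", "X says both"): 0.05,
    ("X isn't enumerating exactly the theorems of PA", None): 0.05,
}

# Condition on PA being consistent and on X naming all and only theorems
# of PA: drop the last two branches and renormalize what remains.
kept = {k: p for k, p in branches.items()
        if k[0] in ("theorem", "not a theorem")}
total = sum(kept.values())  # 0.90
p_named = kept[("theorem", "X says it")] / total  # 0.70 / 0.90
print(f"P(X eventually names it | conditions) = {p_named:.0%}")  # 78%
```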
What am I missing?