Successful use would count as evidence for the laws of probability providing “good” values, right? So if we use these laws quite a bit and they always work, we might have P(Laws of Probability do what we think they do) = 0.99999.
We could discount our output using this. We could also be more constructive and discount based on the complexity of the derivation, using the principle that long proofs are less likely to be correct, in the following way: each derivation is assembled from combinations of sub-derivations, so we could get probability bounds for new, longer derivations from our priors over the sub-derivations from which they are built. (Here “derivation” means the general form of the computation rather than the value-specific one.)
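The discounting scheme above can be sketched as follows. This is a minimal illustration, assuming (hypothetically) that a derivation is correct only if every sub-derivation it uses is correct, and that errors across sub-derivations are independent; the step names and prior values are invented for the example, not taken from any real proof system.

```python
from math import prod

# Illustrative priors over the correctness of some sub-derivations.
priors = {
    "modus_ponens": 0.9999,
    "algebra_step": 0.999,
    "induction_step": 0.995,
}

def derivation_confidence(steps):
    """Lower bound on P(derivation is correct): the product of the
    priors of its sub-derivations, assuming independent errors."""
    return prod(priors[s] for s in steps)

short_proof = ["modus_ponens", "algebra_step"]
long_proof = ["modus_ponens", "algebra_step", "induction_step",
              "algebra_step", "induction_step"]

# Longer proofs accumulate more chances to go wrong, so under this
# scheme their confidence bound is strictly lower.
assert derivation_confidence(long_proof) < derivation_confidence(short_proof)
```

The multiplicative form directly captures “long proofs are less likely to be correct”: every additional sub-derivation multiplies in a factor below 1.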
ETA: Wait, were you sort of diagonalizing on Bayes’ theorem, since we need to use it to update P(Bayes’ theorem)? If so, I might have misread you.