What should a Bayesian do given probability of proving X vs. of disproving X?

Consider some disputed proposition X. Suppose there appear to be a limited number of ways of proving and of disproving X. No one has yet constructed a proof or a disproof, but you have some feeling for how likely it is that someone will.

For instance, take Fermat’s Last Theorem or the 4-color problem. For each of them, at one point in time, there was no proof, but people had some sense of how likely it was that no counterexample would have turned up in the space searched so far. They could use that to assign a probability of there being a counterexample (and hence a disproof) [1]. Later, there was an alleged proof, and people could estimate the probability that the proof was correct based on the reputation of the prover and the approach used. At that point, people could assign values to both P(will_be_proven(X)) and P(will_be_disproven(X)).
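
For concreteness, here is one toy version of that kind of update, a sketch with made-up numbers and a crude independence assumption rather than anything specific to either theorem:

```python
# Toy sketch (all numbers are made up, not estimates for FLT or 4-coloring):
# how finding no counterexample in the cases searched so far shifts the
# probability that a counterexample exists at all.

prior_false = 0.5       # assumed prior that the conjecture is false
p_hit = 1e-6            # assumed chance each checked case exposes a counterexample, if one exists
n_checked = 4_000_000   # cases searched so far, none of them a counterexample

# Likelihood of the observed "no counterexample" under each hypothesis
no_hit_if_false = (1 - p_hit) ** n_checked
no_hit_if_true = 1.0

p_counterexample = (prior_false * no_hit_if_false) / (
    prior_false * no_hit_if_false + (1 - prior_false) * no_hit_if_true
)
print(f"P(counterexample exists | none found) = {p_counterexample:.3f}")  # ~0.018
```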

Is it reasonable to assign P(X) = P(will_be_proven(X)) / (P(will_be_proven(X)) + P(will_be_disproven(X)))?
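
As a quick illustration of what that assignment does, here is a sketch with illustrative numbers (the function name estimate_p_x is just mine):

```python
def estimate_p_x(p_prove: float, p_disprove: float) -> float:
    """Proposed estimator: renormalize the two 'settled' outcomes against each other."""
    return p_prove / (p_prove + p_disprove)

# E.g. a proof judged 60% likely to appear, a disproof 10% likely:
print(estimate_p_x(0.60, 0.10))   # 0.857...
```

Whatever probability is left over for X never being settled either way simply drops out of the ratio.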

If so, consider X = “free will exists”. One could argue that the term “free will” is defined such that it is impossible to detect free will, or to prove that it exists, so that P(will_be_proven(X)) = 0. But if one could prove that the many-worlds interpretation of quantum mechanics is correct, that would constitute a disproof of X, so P(will_be_disproven(X)) > 0. Then P(will_be_proven(X)) / (P(will_be_proven(X)) + P(will_be_disproven(X))) = 0.
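
Numerically (with an arbitrary illustrative disproof probability), the definition-makes-proof-impossible assumption is what drives the result:

```python
# Free-will case: proof judged impossible, disproof merely possible.
p_prove, p_disprove = 0.0, 0.05          # illustrative numbers
print(p_prove / (p_prove + p_disprove))  # 0.0, regardless of how small p_disprove is
```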

Is it possible for this to happen, i.e. for the formula to assign P(X) = 0, when you know that X is not undecidable? If so, what do you do then?

1. The computation is not as simple as it might appear, because you need to adjust for the selection effect of mathematicians being interested only in conjectures with no known counterexamples.
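
One toy way to picture that adjustment, assuming (purely for illustration) that your prior was formed by looking at conjectures that had already survived an initial batch of checked cases, so that batch should not be counted as evidence again:

```python
# Toy illustration of the selection effect (made-up numbers, one possible model):
# if your prior that the conjecture is false already reflects the fact that it
# survived its first n_preselect checked cases (that's why anyone cares about it),
# then only the searching done after that point is new evidence; re-counting the
# early cases double-counts them.

def p_false_after_search(prior_false: float, p_hit: float, n_new_cases: int) -> float:
    """Bayes update on 'no counterexample found' among n_new_cases checked cases."""
    no_hit_if_false = (1 - p_hit) ** n_new_cases
    return (prior_false * no_hit_if_false
            / (prior_false * no_hit_if_false + (1 - prior_false)))

prior_false, p_hit = 0.5, 1e-6
n_total, n_preselect = 4_000_000, 3_000_000

naive = p_false_after_search(prior_false, p_hit, n_total)                # double-counts early cases
adjusted = p_false_after_search(prior_false, p_hit, n_total - n_preselect)
print(f"naive = {naive:.3f}, selection-adjusted = {adjusted:.3f}")        # ~0.018 vs ~0.269
```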