However, Polya’s notion of being a good guesser has been formalised as Bayesian reasoning. If a theorem prover, facing a fork in its search tree, uses Bayes’ Theorem to pick the most likely branch first, it may find much deeper results.
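As a toy sketch of what that might look like (all branch names, priors, and likelihoods here are invented for illustration, not taken from any actual prover):

```python
# Hypothetical fork in a prover's search tree: two candidate tactics.
# priors: base rate at which each tactic succeeds on past goals.
# likelihoods: P(observed goal features | tactic is the right one).
priors = {"rewrite": 0.3, "induction": 0.7}
likelihoods = {"rewrite": 0.8, "induction": 0.2}

# Bayes' Theorem: P(branch | features) ∝ P(features | branch) * P(branch),
# normalised by the total evidence.
evidence = sum(priors[b] * likelihoods[b] for b in priors)
posteriors = {b: priors[b] * likelihoods[b] / evidence for b in priors}

# Explore the branch with the highest posterior first.
best_branch = max(posteriors, key=posteriors.get)
print(best_branch)  # "rewrite": 0.24/0.38 beats 0.14/0.38
```

The point is only that the branch ordering comes from a posterior rather than a fixed heuristic; a real prover would need to learn the priors and likelihoods from its search history.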
I don’t think there is currently much overlap between those working on theorem proving and those working on Bayesian statistics. There is perhaps even a clash of temperaments between those who seek absolute certainty and those who seek an edge in the messy process of muddling through. Nevertheless I foresee great possibilities of using Bayesian reasoning to guide the internal search of theorem provers, thus giving them a sense of direction.
What I question is whether the Bayesian reasoning algorithms are at all computationally feasible to implement to solve nontrivial problems. See the final two paragraphs of my comment here about computational complexity. Do you have evidence in the other direction?
No. I think one typically has to come up with a brutally truncated approximation to actual Bayesian reasoning. For example, if you have n propositions, instead of considering all 2^n basic conjunctions, one's first idea is to assume that they are all independent. Typically that is a total failure; the independence assumption abolishes the very interactions that were of interest. So one might let proposition n depend on proposition n-1 and reinvent Markov models.
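A minimal illustration of the blow-up and the truncations (just parameter counts, nothing model-specific): the full joint over n binary propositions needs one probability per basic conjunction, independence needs only one marginal per proposition, and a first-order Markov chain sits just above that:

```python
# Free parameters needed to specify a distribution over n binary
# propositions, under three modelling choices.

def full_joint_params(n):
    # One probability per basic conjunction, minus one for normalisation.
    return 2 ** n - 1

def independent_params(n):
    # One marginal probability per proposition.
    return n

def markov_params(n):
    # One marginal for proposition 1, plus a 2-entry conditional table
    # P(p_k | p_{k-1}) for each of the remaining n-1 propositions.
    return 1 + 2 * (n - 1)

n = 20
print(full_joint_params(n))   # 1048575
print(independent_params(n))  # 20
print(markov_params(n))       # 39
```

The Markov truncation keeps some interaction between neighbouring propositions at roughly the cost of the independence assumption, which is exactly the trade being described above.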
I don’t see much hope of being able to anticipate which, if any, crude approximations to Bayesian reasoning are going to work well enough. One just has to try it and see. I don’t think that my comment goes any deeper than saying that there are lots of close-to-practical things due to be tried soon, so I expect one or two pleasant surprises.
Ok, thanks for the clarification