If you’re a subjective Bayesian, I think it’s equally appropriate to assign probabilities to arithmetical statements as to contingent propositions.
I’m not saying that they can’t be solved, but there are self-reference problems when you start doing this; it seems that, for consistency, you should assign probabilities to the accuracy of your own probability calculations, inviting a vicious regress.
Even for a subjective Bayesian, the “personal belief” must somehow be connectable to “reality”, no?
There are a number of things one could do with a nice logical prior:
One could use it as a model for what a mathematician does when they search for examples in order to evaluate the plausibility of a Π_1 conjecture (like the twin prime conjecture). One could accept the model as normative and argue that mathematicians conform to it insofar as their intuitions obey Cox’s theorem; or one could argue that the model is not normative because arithmetic sentences aren’t necessarily true or false / don’t have “aboutness”.
One could use it to define a computable maximizing agent as follows (a toy sketch appears after this list): The agent works within a theory T that is powerful enough to express arithmetic and that has symbols meant to refer to the agent and to aspects of the world. The agent maximizes the expectation of some objective function with respect to conditional probabilities of the form P(“outcome O obtains” | “I perform action A, and the theory T is true”). Making an agent like this has some advantages:
It would perform well if placed into an environment where the outcomes of its actions depend on the behavior of powerful computational processes (such as a supercomputer that dispenses pellets of utilitronium if it can find a twin prime).
More specifically, it would perform well if placed into an environment that contains multiple computable agents.
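To make the agent proposal concrete, here is a minimal runnable sketch, not the commenter’s actual construction: `LogicalPrior` is a hypothetical stand-in for a real prior over sentences of T, backed by a hand-filled probability table purely so the example executes, and the action and outcome names are invented for illustration.

```python
# A minimal sketch of the expected-utility maximizer described above, assuming
# a hypothetical LogicalPrior interface over sentences of the theory T.

from typing import Dict, List, Tuple


class LogicalPrior:
    """Stand-in for a prior over sentences: P(sentence | evidence)."""

    def __init__(self, table: Dict[Tuple[str, str], float]):
        self.table = table

    def conditional_prob(self, sentence: str, evidence: str) -> float:
        # Pairs the table knows nothing about default to a 50/50 guess.
        return self.table.get((sentence, evidence), 0.5)


def choose_action(prior: LogicalPrior,
                  actions: List[str],
                  utilities: Dict[str, float],
                  theory: str) -> str:
    """Pick the action A maximizing the sum over outcomes O of
    U(O) * P("outcome O obtains" | "I perform action A, and the theory T is true")."""
    def expected_utility(action: str) -> float:
        evidence = f"I perform action {action}, and the theory {theory} is true"
        return sum(u * prior.conditional_prob(f"outcome {o} obtains", evidence)
                   for o, u in utilities.items())

    return max(actions, key=expected_utility)


# Toy run: the prior thinks searching for twin primes is far more likely
# to earn a utilitronium pellet than idling.
prior = LogicalPrior({
    ("outcome pellet obtains", "I perform action search, and the theory T is true"): 0.9,
    ("outcome pellet obtains", "I perform action idle, and the theory T is true"): 0.1,
})
print(choose_action(prior, ["search", "idle"], {"pellet": 1.0}, "T"))  # -> search
```

The design point is just that the agent’s beliefs about outcomes are probabilities assigned to sentences, conditioned on a sentence describing its own action and the truth of T.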
Hmm, most of this went way over my head, unfortunately. I have no problem understanding probability in statements like “There is a 0.1% chance of the twin prime conjecture being proven in 2014”, because it is one of many similar statements that can be bet upon, with a well-calibrated predictor coming out ahead on average. Is the statement “the twin prime conjecture is true with 99% probability” a member of some set of statements a well-calibrated agent can use to place bets and win?
For that purpose a better example is a computationally difficult statement, like “There are at least X twin primes below Y”. We could place bets, then acquire more computing power, and then resolve the bets.
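As an illustration of how a bet of that form could be mechanically resolved once the computing power arrives, here is a small sketch; it adopts the (assumed) convention that a twin prime pair counts when both members are below Y.

```python
def count_twin_primes_below(y: int) -> int:
    """Count pairs (p, p + 2), both prime and both below y, via a sieve of Eratosthenes."""
    sieve = [True] * y
    sieve[0:2] = [False, False]
    for i in range(2, int(y ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(1 for p in range(2, y - 2) if sieve[p] and sieve[p + 2])


# Resolving the bet "There are at least X twin primes below Y":
x, y = 8, 100
print(count_twin_primes_below(y) >= x)  # 8 pairs below 100: (3,5), ..., (71,73) -> True
```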
The mathematical theory of statements like the twin prime conjecture should be essentially the same, but simpler.
Sure; bet on mathematical conjectures, and collect when they are resolved one way or the other.