if you’re not sure whether the twin prime conjecture is true, then each time you discover a new twin prime larger than all that you have seen before, you should ever so slightly increase the probability you assign to the conjecture.
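One toy way to cash out that kind of update (a sketch only; the hypothesis family, the cutoffs, and the prior weights below are invented for illustration): treat “the conjecture is false” as a family of hypotheses H_N = “there are no twin prime pairs above N”, together with H_inf = “there are infinitely many”. Finding a pair above N falsifies H_N, so each record-breaking twin prime shifts a little posterior mass onto H_inf.

```python
# Toy sketch: a crude prior over "where the twin primes stop".
# H_inf = "infinitely many twin primes"; H_N = "no twin prime pairs above N".
# The cutoffs and the prior weights are made up for illustration.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

cutoffs = [10, 100, 1_000, 10_000, 100_000]
posterior = {"H_inf": 0.5, **{N: 0.5 / len(cutoffs) for N in cutoffs}}

for p in range(3, 2_000):
    if is_prime(p) and is_prime(p + 2):
        # The pair (p, p+2) falsifies every H_N with N < p.
        for N in cutoffs:
            if N < p:
                posterior[N] = 0.0
        total = sum(posterior.values())
        posterior = {k: v / total for k, v in posterior.items()}

print(posterior["H_inf"])  # climbs above the 0.5 prior as larger pairs turn up
```

Whether an update like this even makes sense is what the discussion below is about.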
I do not understand what you mean by “probability” here. Suppose I use one criterion to estimate that the twin prime conjecture is true with probability 0.99, but a different criterion gives me 0.9999. In what situation would my choice of criterion matter?
Are we talking about some measure over many (equally?) possible worlds, in some of which the TPC is true and in others false (or maybe unprovable)? What would I do differently if I were convinced that one criterion is “right” and the other “wrong”, versus the other way around? Would I spend more time trying to prove the conjecture if I thought it was more likely true, or something?
If you’re a subjective Bayesian, I think it’s just as appropriate to assign probabilities to arithmetical statements as to contingent propositions.
I’m not saying they can’t be solved, but there are self-reference problems when you start doing this; it seems that, for consistency, you should assign probabilities to the accuracy of your own probability calculations, inviting a vicious regress.
Even for a subjective Bayesian, the “personal belief” must be somehow connectable to “reality”, no?
There are a number of things one could do with a nice logical prior:
One could use it as a model for what a mathematician does when they search for examples in order to evaluate the plausibility of a Pi_1 conjecture (like the twin prime conjecture). One could accept the model as normative and argue that mathematicians conform to it insofar as their intuitions obey Cox’s theorem; or one could argue that the model is not normative because arithmetic sentences aren’t necessarily true or false / don’t have “aboutness”.
One could use it to define a computable maximizing agent as follows: The agent works within a theory T that is powerful enough to express arithmetic and has symbols that are meant to refer to the agent and aspects of the world. The agent maximizes the expectation of some objective function with respect to conditional probabilities of the form P(“outcome O obtains” | “I perform action A, and the theory T is true”). Making an agent like this has some advantages (a simplified sketch of the idea appears after this list):
It would perform well if placed into an environment where the outcomes of its actions depend on the behavior of powerful computational processes (such as a supercomputer that dispenses pellets of utilitronium if it can find a twin prime).
More specifically, it would perform well if placed into an environment that contains multiple computable agents.
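A heavily simplified sketch of the agent described above (my gloss, not the construction itself: a hand-written finite list of candidate “worlds” stands in for the theory T, and conditioning on “T is true” just filters out the worlds inconsistent with T; all numbers are invented):

```python
# Toy stand-in for the maximizing agent above: enumerate candidate worlds,
# keep only those consistent with T, and pick the action with the highest
# expected utility under the renormalized weights.

worlds = [
    # (prior weight, consistent with T?, outcome of each action)
    (0.5, True,  {"A": "good", "B": "bad"}),
    (0.3, True,  {"A": "bad",  "B": "good"}),
    (0.2, False, {"A": "good", "B": "good"}),  # dropped when conditioning on T
]

utility = {"good": 1.0, "bad": 0.0}

def expected_utility(action):
    """E[utility | action, T]: average over worlds consistent with T."""
    live = [(w, outcomes) for w, ok, outcomes in worlds if ok]
    total = sum(w for w, _ in live)
    return sum(w / total * utility[outcomes[action]] for w, outcomes in live)

best = max(["A", "B"], key=expected_utility)
print(best, expected_utility(best))  # A 0.625
```

As I understand it, the actual proposal computes those weights from a logical prior over sentences of T rather than from a hand-written table.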
Hmm, most of this went way over my head, unfortunately. I have no problem understanding probability in statements like “There is a 0.1% chance of the twin prime conjecture being proven in 2014”, because it is one of many similar statements that can be bet upon, with a well-calibrated predictor coming out ahead on average. Is the statement “the twin prime conjecture is true with 99% probability” a member of some set of statements a well-calibrated agent can use to place bets and win?
For that purpose, a better example is a computationally difficult statement, like “There are at least X twin primes below Y”. We could place bets, then acquire more computing power, and then resolve the bets.
The mathematical theory of statements like the twin prime conjecture should be essentially the same, but simpler.
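Resolving such a bet once the computing power shows up is just a finite (if possibly long) computation; a sketch with placeholder values for X and Y:

```python
# Resolve a bet on "there are at least X twin primes below Y" by brute force.
# X and Y are placeholders; a real bet would fix specific values in advance.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def twin_prime_pairs_below(y):
    """Count pairs (p, p+2) with both members prime and p + 2 < y."""
    return sum(1 for p in range(3, y - 2) if is_prime(p) and is_prime(p + 2))

X, Y = 8, 100                           # example values only
print(twin_prime_pairs_below(Y) >= X)   # True: there are exactly 8 such pairs below 100
```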
Sure; bet on mathematical conjectures, and collect when they are resolved one way or the other.
Agree with Nisan’s intuition, though I also agree with Wei Dai’s position that we shouldn’t feel sure that Bayesian probability is the right way to handle logical uncertainty. To answer more directly the question of what it means to assign a probability to the twin prime conjecture: suppose Omega reveals to you that you live in a simulation and offers you a choice between (a) Omega tosses a bent coin that has probability p of landing heads, shuts down the simulation if it lands tails, and otherwise keeps it running forever; and (b) Omega changes the code of the simulation so that it searches for twin primes and runs for one more step whenever it finds one. Then you should be indifferent between (a) and (b) iff you assign probability p to the twin prime conjecture. [ETA: Argh, ok, sorry, not quite, because in (b) you may still get to run for a long time before getting shut down, but you get the idea of what a probability over logical statements should mean.]
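One way to spell out the intended equivalence (setting aside the caveat in the ETA): under (b) the simulation keeps running forever iff the search keeps finding twin primes, i.e. iff there are infinitely many of them, so P(simulation survives | a) = p while P(simulation survives | b) = P(twin prime conjecture), and indifference between the two gambles amounts to p = P(twin prime conjecture).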
but you get the idea of what a probability over logical statements should mean
Not from your example, I do not. I suspect that if you remove this local Omega meme, you are saying that there are many different possible worlds in your inner simulator, and in p*100% of them the conjecture ends up being proven… some day before that world ends. Unless you are a Platonist and assign mathematical “truths” an independent, immaterial existence.
Retracted my comment for being unhelpful (I don’t recognize what I said in what you heard, so I’m clearly not managing to explain myself here).
Thanks for trying, anyway :)