It’s P(I will find a proof in time t) that asks for the probability of a definite event. It’s not that evaluating this number at large t is so problematic, it’s that it doesn’t capture what people usually mean by “provable in principle.”
Suppose that a proof is a finite sequence of symbols from a finite alphabet (which supposition seems reasonable, at least to me). Suppose also that you can determine whether a given sequence constitutes a proof in finite (though not necessarily bounded) time. Then construct an ordering on the sequences (this can be done: the set of all finite sequences is the union over n in N of the finitely many sequences of length n, and is thus countable), and apply the determination procedure to each one in turn. If a proof exists, this method finds it in finite time; thus P(you will find a proof by time t) tends to 1 as t -> infinity if a proof exists, and is constantly 0 for all t if no proof exists.
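Here is a minimal sketch of that enumerate-and-check search in Python. The checker is_proof(candidate) is a hypothetical stand-in for the determination procedure above (nothing in the argument specifies how it works, only that it halts in finite time on each input):

```python
from itertools import count, product

def search_for_proof(alphabet, is_proof):
    """Enumerate every finite sequence over `alphabet` in order of
    increasing length and hand each one to `is_proof` in turn."""
    for n in count(1):                                 # lengths 1, 2, 3, ...
        for candidate in product(alphabet, repeat=n):  # finitely many sequences per length
            if is_proof(candidate):                    # assumed to halt in finite time
                return candidate                       # a proof sits at a finite position
    # if no proof exists, the loop never returns
```

If a proof exists, it occupies some finite position k in this enumeration, so the function returns after exactly k calls to the checker; if none exists, the search runs forever, matching the constant-zero case.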
There’s an obvious problem: we can’t determine with P=1 that a given sequence constitutes a proof (or does not). But suppose that becoming one bit more sure of the proof-status of a given sequence, whenever we are not yet certain, can always be done in finite time. Then dovetail: learn one bit about sequence 1; then one bit each about sequences 1 and 2; then about sequences 1, 2, and 3; and so on. For any sequence, any desired level of certainty is then obtained in finite time.
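A sketch of that dovetailing schedule, again in Python. Here learn_one_bit(k) and settled(k) are hypothetical stand-ins for “gain one bit of certainty about sequence k in finite time” and “the evidence about sequence k now meets the desired level of certainty”:

```python
from itertools import count
from collections import defaultdict

def dovetail_indices():
    """Yield candidate indices in the order 1; 1, 2; 1, 2, 3; ...
    Every index k reappears in every round r >= k, so no sequence
    is ever starved of attention."""
    for r in count(1):
        yield from range(1, r + 1)

def search_with_uncertain_checker(learn_one_bit, settled):
    """Interleave learning across all candidate sequences."""
    bits = defaultdict(int)          # bits of certainty gathered per sequence
    for k in dovetail_indices():
        learn_one_bit(k)             # assumed to take finite time
        bits[k] += 1
        if settled(k):
            return k, bits[k]        # which sequence, and how many bits it took
```

Under this schedule, sequence k receives its m-th bit in round k + m - 1, i.e. after at most (k + m - 1)(k + m)/2 elementary steps, each of finite duration, so any fixed level of certainty about any fixed sequence is indeed reached in finite time.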
If something is provable in principle, then (with a certain, admittedly contrived and inefficient search algorithm) the proof can be found in finite time with probability 1. No?
Finite amount of time, yes. Feasible amount of time, no, unless P = NP. When I said that you were considering agents with unlimited resources, this is what I meant—agents for whom “in principle” is not different from “in practice.” There are no such agents under the sun.
But you don’t have to have unlimited resources; you just have to have some large but finite amount of resources X, and you don’t know in advance how big X is.
Of course, to prove that your resources are sufficient to find the proof, without simply going ahead and trying to find it, you would need those resources to be unlimited, because you don’t know how big X is. But you still know X is finite. “Feasibly computable” is not the same thing as “computable”. “In principle” is, in principle, well defined. “In practice” is not well defined, because as soon as you have X resources, it becomes possible “in practice” for you to find the proof.
I say again that I do not need to postulate infinities in order to postulate an agent which can find a given proof. For any provable theorem, a sufficiently (finitely) powerful agent can find a proof of it (by the dovetailing search described above); equivalently, an agent of fixed power can find it given sufficient (finite) time. So, while such an agent might be “unfeasible” (whatever that might mean), I can still use it as a step in a justification for the existence of infinities.