As a first approximation, what’s wrong with “\lim_{t \to \infty} P(I can find a proof in time t)”?
I slipped; it should be P(I will find a proof in time t), which asks for the probability of a definite event. The problem isn’t evaluating this number at large t; it’s that it doesn’t capture what people usually mean by “provable in principle.”
Also, I don’t see why the prior has to be oracular; what’s wrong with, say, P(the 3^^^3th decimal digit of pi is even)=½?
If A is logically implied by B, then P(A) >= P(B); otherwise you are committing a version of the conjunction fallacy. One can certainly compute the digits of pi, so, since (as non-intuitionists insist anyway) either the $n$th digit is even or it is odd, we must have P($n$th digit is even) >= P(axioms) or P($n$th digit is odd) >= P(axioms). P becomes an oracle by testing, for each assertion x, whether P(x) >= P(axioms). There might be ways out of this, but they require you to think about feasibility.
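A toy illustration of the first sentence (my own example, not from the thread): over any finite probability space, if event B implies event A, then B is a subset of A, so P(A) >= P(B); assigning P(A) < P(B) is exactly the conjunction fallacy.

```python
from itertools import product

# Eight equally likely "worlds", each a triple of bits.
outcomes = list(product([0, 1], repeat=3))
p = {w: 1 / len(outcomes) for w in outcomes}

A = {w for w in outcomes if w[0] == 1}                 # "first bit is 1"
B = {w for w in outcomes if w[0] == 1 and w[1] == 1}   # stronger claim, so B ⊆ A

def P(event):
    return sum(p[w] for w in event)

assert B <= A          # B implies A (B is a subset of A)
assert P(A) >= P(B)    # hence P(A) >= P(B); here P(A) = 0.5, P(B) = 0.25
```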
> It’s P(I will find a proof in time t) that is asking for the probability of a definite event. It’s not that evaluating this number at large t is so problematic, it’s that it doesn’t capture what people usually mean by “provable in principle.”
Suppose that a proof is a finite sequence of symbols from a finite alphabet (which supposition seems reasonable, at least to me). Suppose also that you can determine, in finite (though not necessarily bounded) time, whether a given sequence constitutes a proof. Then construct an ordering on sequences (possible: the set of all finite sequences is the union over n in (countable) N of the finitely many sequences of length n, and is thus countable), and apply the determination procedure to each one in turn. If a proof exists, this method finds it in finite time; thus P(you will find a proof by time t) tends to 1 as t → ∞ if a proof exists, and is identically 0 for all t if no proof exists.
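The enumeration described above can be sketched directly. Here `is_proof` is a hypothetical stand-in for whatever finite-time checking procedure is assumed; the toy checker at the bottom is mine, purely for illustration.

```python
from itertools import count, product

def candidate_sequences(alphabet):
    """Enumerate every finite sequence over a finite alphabet:
    all length-0 sequences, then length-1, then length-2, ..."""
    for n in count(0):
        for seq in product(alphabet, repeat=n):
            yield "".join(seq)

def search_for_proof(is_proof, alphabet):
    """Apply the (assumed finite-time) checker to each candidate in turn.
    Halts iff a proof exists; otherwise runs forever."""
    for seq in candidate_sequences(alphabet):
        if is_proof(seq):
            return seq

# Toy stand-in checker: pretend the string "ba" is the sought proof.
print(search_for_proof(lambda s: s == "ba", "ab"))  # prints: ba
```

Since each length class is finite, any particular sequence is reached after finitely many steps, which is all the argument needs.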
There’s an obvious problem: we can’t determine with P=1 that a given sequence constitutes a proof (or does not). But suppose that becoming one bit more sure of the proof-status of a given sequence, when not yet certain, can always be done in finite time. Then dovetail: learn one bit about sequence 1; then one bit each about sequences 1 and 2; then sequences 1, 2, and 3; and so on. For any sequence, any desired level of certainty is then reached in finite time.
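The dovetailing schedule above can be written as a one-liner generator (a sketch of my own, just to make the visiting order concrete):

```python
from itertools import count, islice

def dovetail_schedule():
    """Round k spends one step on each of sequences 1..k, so every
    sequence receives unboundedly many steps, while only finitely
    many steps precede any given (sequence, round) pair."""
    for k in count(1):
        for i in range(1, k + 1):
            yield i

print(list(islice(dovetail_schedule(), 10)))
# first rounds: [1, 1, 2, 1, 2, 3, 1, 2, 3, 4]
```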
If something is provable in principle, then (with a certain, admittedly contrived and inefficient search algorithm) the proof can be found in finite time with probability 1. No?
> If something is provable in principle, then (with a certain, admittedly contrived and inefficient search algorithm) the proof can be found in finite time with probability 1. No?
Finite amount of time, yes. Feasible amount of time, no, unless P = NP. When I said that you were considering agents with unlimited resources, this is what I meant—agents for whom “in principle” is not different from “in practice.” There are no such agents under the sun.
But you don’t have to have unlimited resources; you just have to have some large but finite amount of resources X, and you don’t know how big X is.
Of course, in order to prove that your resources are sufficient to find the proof, without simply going ahead and trying to find the proof, you would need those resources to be unlimited—because you don’t know how big X is. But you still know it’s finite. “Feasibly computable” is not the same thing as “computable”. “In principle” is, in principle, well defined. “In practice” is not well-defined, because as soon as you have X resources, it becomes possible “in practice” for you to find the proof.
I say again that I do not need to postulate infinities in order to postulate an agent which can find a given proof. For any provable theorem, a sufficiently (finitely) powerful agent can find it (by the above diagonal algorithm); equivalently, an agent of fixed power can find it given sufficient (finite) time. So, while such an agent might be “infeasible” (whatever that might mean), I can still use it as a step in a justification for the existence of infinities.
> One can certainly compute the digits of pi, so that since (as non-intuitionists insist anyway) either the $n$th digit is even, or it is odd, we must have P($n$th digit is even) >= P(axioms) or P($n$th digit is odd) >= P(axioms).
I don’t think that’s valid. Even if I know (P=1) that there is a fact-of-the-matter about whether the $n$th digit is even, if I have no information causally determined by whether it is even, then I assign P(even) = P(odd) = ½. If instead I believe only with P = P(axioms) that a fact-of-the-matter exists, then I assign P(even) = P(odd) = ½ · P(axioms). Axioms ⇏ even, but Axioms ⇒ (even or odd), and indeed P(even or odd) = P(even) + P(odd) = (½ + ½) · P(axioms) = P(axioms); no problem. “A fact-of-the-matter exists for statement A” is just (A or ¬A), and assuming that our axioms include Excluded Middle, P(A or ¬A) >= P(axioms).
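A quick sanity check of that bookkeeping (the numeric value of P(axioms) is illustrative; any value in [0, 1] works the same way):

```python
# Illustrative value only: the argument holds for any p_axioms in [0, 1].
p_axioms = 0.9
p_even = 0.5 * p_axioms   # P(even) = ½ · P(axioms)
p_odd = 0.5 * p_axioms    # P(odd)  = ½ · P(axioms)

# "Even" and "odd" are mutually exclusive, so their probabilities add,
# and the disjunction comes out to exactly P(axioms): neither disjunct
# is forced above P(axioms), so no oracle emerges.
assert abs((p_even + p_odd) - p_axioms) < 1e-12
assert p_even <= p_axioms and p_odd <= p_axioms
```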
Summary: P is about my knowledge; existence of a fact-of-the-matter is about, well, the fact-of-the-matter. As far as I can tell, you’re confusing map and territory.