It is a fact that agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems.
Not so! In most interesting formal systems, exponentially many possible deductions can be made. All known agents can make only a bounded number of them. A widely believed conjecture implies that even longer-lived agents could make only a polynomial number of possible deductions.
Um, that’s not quite what I meant… if I prove a theorem you’re unlikely to prove its negation, and if I tell you I have a proof of a theorem you’re likely to be able to find one. Moreover, in the latter case, if you’re not able to find a proof, then either I am mistaken in my belief that I have a proof, or you haven’t looked hard enough (and in principle could find a proof if you did). That doesn’t mean that every agent with “I think therefore I am” will deduce the existence of rice pudding and income tax, it only means that if one agent can deduce it, the others in principle can do the same. For any formal system T, the set {X : T proves X} is recursively enumerable, no? (assuming ‘proves’ is defined by ‘a proof of finite length exists’, as happens in Gödel-numbering / PA “box”)
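That r.e. claim can be made concrete. Here is a minimal sketch using a made-up toy system (the axiom and rewrite rules are hypothetical, chosen only for illustration): breadth-first enumeration reaches every theorem after finitely many steps, but gives no verdict on non-theorems — which is exactly "recursively enumerable, not decidable."

```python
from collections import deque

def enumerate_theorems(axioms, rules):
    """Breadth-first enumeration of everything derivable: every theorem
    appears after finitely many steps, so the theorem set is r.e."""
    seen = set()
    queue = deque(axioms)
    while queue:
        t = queue.popleft()
        if t in seen:
            continue
        seen.add(t)
        yield t
        for rule in rules:
            queue.extend(rule(t))

# Hypothetical toy system: one axiom, two rewrite rules.
axioms = ["I"]
rules = [lambda s: [s + "O"], lambda s: ["I" + s + "I"]]

def proves(target, budget=1000):
    """Semi-decision: a theorem is confirmed in finite time; for a
    non-theorem the unbounded search never halts, so we cut off at a
    budget -- and a cutoff says nothing about unprovability."""
    for i, t in enumerate(enumerate_theorems(axioms, rules)):
        if t == target:
            return True
        if i >= budget:
            return False  # gave up, not "disproved"
    return False
</imports>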
Um, that’s not quite what I meant… if I prove a theorem you’re unlikely to prove its negation,
As you know, I am outside of mainstream opinion on this. But this:
and if I tell you I have a proof of a theorem you’re likely to be able to find one.
is flatly wrong if P ≠ NP, and not controversially so. If I reach a point in a large search space, there is no guarantee that you can reach the same point in a feasible amount of time.
For any formal system T, the set {X : T proves X} is recursively enumerable, no? (assuming ‘proves’ is defined by ‘a proof of finite length exists’, as happens in Gödel-numbering / PA “box”)
Recursively enumerable sure. Not feasibly enumerable.
But why should feasibility matter? Sure, the more steps it takes to prove a proposition, the less likely you are to be able to find a proof. But saying that things are true only by virtue of their proof being feasible… is disturbing, to say the least. If we build a faster computer, do some propositions suddenly become true, because we now have the computing power to prove them?
Me saying I have a proof of a theorem should cause you to update P(you can find a proof) upwards. (If it doesn’t, I’d be very surprised.) Consequently, there is something common.
Similarly, no matter how low your prior probability for “PA is consistent”, so long as that probability is not 0, learning that I have proved a theorem should cause you to decrease your estimate of the probability that you will prove its negation.
But why should feasibility matter? Sure, the more steps it takes to prove a proposition, the less likely you are to be able to find a proof
Incidentally but importantly, lengthiness is not expected to be the only obstacle to finding a proof. Cryptography depends on this.
As to why feasibility matters: it’s because we have limited resources. You are trying to reason about reality from the point of view of a hypothetical entity that has infinite resources. If you wish to convince people to be less skeptical of infinity (your stated intention), you will have to take feasibility into account or else make a circular argument.
But saying that things are true only by virtue of their proof being feasible… is disturbing, to say the least. If we build a faster computer, do some propositions suddenly become true, because we now have the computing power to prove them?
I am certainly not saying that feasible proofs cause things to be true. Our previous slow computer and our new fast computer cause exactly the same number of important things to be true: none at all. That is the formalist position, anyway.
Similarly, no matter how low your prior probability for “PA is consistent”, so long as that probability is not 0, learning that I have proved a theorem should cause you to decrease your estimate of the probability that you will prove its negation.
Not so. If I have P(PA will be shown inconsistent in fewer than m minutes) = p, then I also have P(I will prove the negation of your theorem in fewer than m+1 minutes) = p. Your ability to prove things doesn’t enter into it.
lengthiness is not expected to be the only obstacle to finding a proof
True; stick a ceteris paribus in there somewhere.
You are trying to reason about reality from the point of view of a hypothetical entity that has infinite resources.
Not so; I am reasoning about reality in terms of what it is theoretically possible we might conclude with finite resources. It is just that enumerating the collection of things it is theoretically possible we might conclude with finite resources requires infinite resources (and may not be possible even then). Fortunately I do not require an enumeration of this collection.
I am certainly not saying that feasible proofs cause things to be true. Our previous slow computer and our new fast computer cause exactly the same number of important things to be true: none at all. That is the formalist position, anyway.
So either things that are unfeasible to prove can nonetheless be true, or nothing is true. So why does feasibility matter again?
P(I will prove the negation of your theorem in fewer than m+1 minutes) = p
No, it is > p. P(I will prove 1=0 in fewer than m+1 minutes) = p + epsilon. P(I will prove 1+1=2 in fewer than m+1 minutes) is nearly 1. This is because you don’t know whether my proof was correct.
Me saying I have a proof of a theorem should cause you to update P(you can find a proof) upwards.
A positive but minuscule amount. This is how cryptography works! In less than a minute (aided by my very old laptop), I gave a proof of the following theorem: the second digit of each of the prime factors of n is 6, where
n = 44289087508518650246893852937476857335929624072788480361
It would take you much longer to find a proof (even though the proof is very short!).
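The asymmetry is easy to demonstrate at toy scale. In the sketch below the two primes are small stand-ins I picked for illustration (they are not the factors of the n above, which are elided; note their second digit is 0, not 6): checking a claimed factorization is a single multiplication plus a digit lookup, while finding a factor unaided is a search whose cost grows with the smallest factor.

```python
# Hypothetical stand-in primes, tiny compared to a real 56-digit modulus.
p, q = 100003, 100019
n = p * q

def check_claim(n, p, q):
    """Verifying the claimed proof: one multiplication, two digit lookups."""
    return p * q == n and str(p)[1] == "0" and str(q)[1] == "0"

def find_factor(n):
    """Finding the proof without the hint: trial division, whose running
    time grows with the size of the smallest prime factor."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime
```

At 40-or-more-digit scale the search side becomes astronomically more expensive than the check, which is the point of the example.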
But that isn’t what either of us said. You mentioned P(you can find a proof). I am telling you (telling you, modulo standard open problems) that this can be very small even after another agent has found a proof. This is a standard family of topics in computer science.
I am aware it can be very small. The only sense in which I claimed otherwise was by a poor choice of wording. The use I made of the claim that “Agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems” was to argue for the proposition that there is a fact-of-the-matter about which theorems are provable in a given system. You accept that my finding a proof causes you to update P(you can find a proof) upwards by a strictly positive amount—from which I infer that you accept that there is a fact-of-the-matter as to whether a proof exists. In which case, you are not arguing with my conclusion, merely with a step I used in deriving it—a step I have replaced—so does that not screen off my conclusion from that step—so why are you still arguing with me?
I am still arguing with you because I think your misstep poisons more than you have yet realized, not to get on your nerves.
You accept that my finding a proof causes you to update P(you can find a proof) upwards by a strictly positive amount—from which I infer that you accept that there is a fact-of-the-matter as to whether a proof exists.
No. “I can find a proof in time t” is a definite event whose probability maybe can be measured (with difficulty!). “A proof exists” is a much murkier statement and it is much more difficult to discuss its probability. (For instance it is not possible to have a consistent probability distribution over assertions like this without assigning P(proof exists) = 0 or P(proof exists) = 1. Such a consistent prior is an oracle!)
I am still arguing with you because I think your misstep poisons more than you have yet realized, not to get on your nerves.
I wasn’t suggesting you were trying to get on my nerves. I just think we’re talking past each other.
“A proof exists” is a much murkier statement and it is much more difficult to discuss its probability.
As a first approximation, what’s wrong with “\lim_{t → \infty} P(I can find a proof in time t)”?
Also, I don’t see why the prior has to be oracular; what’s wrong with, say, P(the 3^^^3th decimal digit of pi is even)=½? But then if the digit is X, then surely a proof exists that it is X (because, in principle, the digit can be computed in finitely many steps); it must be some X in [[:digit:]], so if it is even a proof exists that it is even; otherwise (sharp swerve) one does not, and P=½. Not sure about that sharp swerve; if I condition all my probabilities on |arithmetic is consistent) then it’s ok. But then, assuming I actually need to do so, the probabilities would be different if conditioned on |arithmetic is inconsistent), and thus by finding a proof, you find evidence for or against the assertion that arithmetic is consistent. But things you can find evidence on, exist! (They are the sheep that determine your pebbles.) So where did I go wrong? (Did I slip a meta-level somewhere? It’s possible; I was juggling them a bit.)
As a first approximation, what’s wrong with “\lim_{t → \infty} P(I can find a proof in time t)”?
I slipped. It’s P(I will find a proof in time t) that is asking for the probability of a definite event. It’s not that evaluating this number at large t is so problematic, it’s that it doesn’t capture what people usually mean by “provable in principle.”
Also, I don’t see why the prior has to be oracular; what’s wrong with, say, P(the 3^^^3th decimal digit of pi is even)=½?
If A is logically implied by B, then P(A) >= P(B), or else you are committing a version of the conjunction fallacy. One can certainly compute the digits of pi, so that since (as non-intuitionists insist anyway) either the nth digit is even or it is odd, we must have P(nth digit is even) >= P(axioms) or P(nth digit is odd) >= P(axioms). P becomes an oracle by testing, for each assertion x, whether P(x) >= P(axioms). There might be ways out of this but they require you to think about feasibility.
It’s P(I will find a proof in time t) that is asking for the probability of a definite event. It’s not that evaluating this number at large t is so problematic, it’s that it doesn’t capture what people usually mean by “provable in principle.”
Suppose that a proof is a finite sequence of symbols from a finite alphabet (which supposition seems reasonable, at least to me). Suppose that you can determine whether a given sequence constitutes a proof in finite time (not necessarily bounded). Then construct an ordering on sequences (this can be done: the set of all sequences is the union over n in (countable) N of the finitely many sequences of length n, and is thus countable), and apply the determination procedure to each one in turn. Then, if a proof exists, you will find it in finite time by this method; thus P(you will find a proof by time t) tends to 1 as t → ∞ if a proof exists, and is constant 0 for all t if no proof exists.
There’s an obvious problem: we can’t determine with P=1 that a given sequence constitutes a proof (or does not). But suppose that becoming one bit more sure, when not certain, of the proof-status of a given sequence can always be done in finite time. Then learn one bit about sequence 1; then sequence 1, then sequence 2; then sequence 1, then sequence 2, then sequence 3; and so on. Then for any sequence, any desired level of certainty is obtained in finite time.
If something is provable in principle, then (with a certain, admittedly contrived and inefficient search algorithm) the proof can be found in finite time with probability 1. No?
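The contrived search algorithm described above can be written down directly. The verifier below is a hypothetical stand-in for "determine whether a given sequence constitutes a proof"; the structure is what matters: enumerate all finite strings in length order and test each one, halting iff an accepted string exists.

```python
from itertools import count, product

def search(verifier, alphabet="ab"):
    """Enumerate every finite string over the alphabet in length order and
    test each one. If an accepted string (a 'proof') exists, this halts in
    finite time; otherwise it runs forever -- finite in principle, with no
    bound known in advance."""
    for n in count(0):
        for chars in product(alphabet, repeat=n):
            s = "".join(chars)
            if verifier(s):
                return s

# Hypothetical stand-in verifier: accept length-4 palindromes.
found = search(lambda s: len(s) == 4 and s == s[::-1])
```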
If something is provable in principle, then (with a certain, admittedly contrived and inefficient search algorithm) the proof can be found in finite time with probability 1. No?
Finite amount of time, yes. Feasible amount of time, no, unless P = NP. When I said that you were considering agents with unlimited resources, this is what I meant—agents for whom “in principle” is not different from “in practice.” There are no such agents under the sun.
But you don’t have to have unlimited resources, you just have to have X large but finite amount of resources, and you don’t know how big X is.
Of course, in order to prove that your resources are sufficient to find the proof, without simply going ahead and trying to find the proof, you would need those resources to be unlimited—because you don’t know how big X is. But you still know it’s finite. “Feasibly computable” is not the same thing as “computable”. “In principle” is, in principle, well defined. “In practice” is not well-defined, because as soon as you have X resources, it becomes possible “in practice” for you to find the proof.
I say again that I do not need to postulate infinities in order to postulate an agent which can find a given proof. For any provable theorem, a sufficiently (finitely) powerful agent can find it (by the above diagonal algorithm); equivalently, an agent of fixed power can find it given sufficient (finite) time. So, while such might be “unfeasible” (whatever that might mean), I can still use it as a step in a justification for the existence of infinities.
One can certainly compute the digits of pi, so that since (as non-intuitionists insist anyway) either the $n$th digit is even, or it is odd, we must have P($n$th digit is even) >= P(axioms) or P($n$th digit is odd) >= P(axioms).
I don’t think that’s valid—even if I know (P=1) that there is a fact-of-the-matter about whether the nth digit is even, if I don’t have any information causally determined by whether the nth digit is even then I assign P(even) = P(odd) = ½. If I instead only believe with P=P(axioms) that a fact-of-the-matter exists, then I assign P(even) = P(odd) = ½ * P(axioms). Axioms ⇏ even. Axioms ⇒ (even or odd). P(axioms) = P(even or odd) = P(even)+P(odd) = (½ + ½) * P(axioms) = P(axioms), no problem. “A fact-of-the-matter exists for statement A” is (A or ¬A), and assuming that our axioms include Excluded Middle, P(A or ¬A) >= P(axioms).
Summary: P is about my knowledge; existence of a fact-of-the-matter is about, well, the fact-of-the-matter. As far as I can tell, you’re confusing map and territory.
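The bookkeeping in this reply can be checked numerically; the 0.9 prior below is an arbitrary illustrative value, not anything from the discussion.

```python
# Arbitrary prior on the axioms (illustrative value only).
p_axioms = 0.9
# No information causally determined by the digit: split the mass evenly.
p_even = p_odd = 0.5 * p_axioms

# Axioms imply (even or odd), and the disjuncts are exclusive:
assert p_even + p_odd == p_axioms   # P(even or odd) = P(axioms)
# But the axioms do not imply "even" alone, so no conjunction fallacy:
assert p_even < p_axioms
```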
I am still arguing with you because I think your misstep poisons more than you have yet realized, not to get on your nerves.
Suggestions of dire wrongness? Bah. Now I’m all tempted to read through the context. But so far I’ve failed to see the point!
Wait. “A proof exists” is murky? What sort of bizarre twisted mathematical reasoning is behind that? (Not that I’m saying mathematics doesn’t say equally bizarre and twisted things!)
In the set of all mathematical things you can say, either there is one that constitutes a proof or there isn’t. That doesn’t mean it is possible to establish whether such a proof exists by using all the computing power in the universe, but it still exists. Not a big deal.
For instance it is not possible to have a consistent probability distribution over assertions like this without assigning P(proof exists) = 0 or P(proof exists) = 1.
Counter-assertion: yes, there is. It’s just mundane logical uncertainty. But even if you decide you’re not allowed to assign probabilities to logical unknowns (making the whole exercise tautologically pointless), that doesn’t make “a proof exists” murky. If anything it makes it all the more clear-cut. It means you’re allowed to say ‘a proof exists’ or the reverse claim, but have forbidden yourself the chance to assign probabilities to the possibility. And not allowing yourself to think about it doesn’t do anything to change whether the thing exists or not.
“A proof exists, and here it is” is not at all murky. “A proof exists, I’m close to one and will show it to you soon” is not at all murky. Either statement could be true or false, and testably so. In the first case, it is just a matter of checking. In the second case, it is just a matter of waiting.
“A proof exists” full stop is in much worse shape. It has no testable consequences.
In the set of all mathematical things you can say, either there is one that constitutes a proof or there isn’t. That doesn’t mean it is possible to establish whether such a proof exists by using all the computing power in the universe, but it still exists. Not a big deal.
Certainly if “the set of all mathematical things” (even “the set of all mathematical proofs”) has some Platonic existence, then EC429 is in much better shape. But this is exactly the kind of thing that any “infinite set atheist” would be unwilling to take for granted.
“A proof exists” full stop is in much worse shape. It has no testable consequences.
It has rather more testable consequences than an item that has moved out of my future light cone. Such things don’t stop existing or not existing because I personally am not able to test them. Because the universe isn’t about me any more than mathematical propositions are.
It’s just mundane logical uncertainty.
Just! Mundane!
Yes. If you are unable to assign probabilities to logical uncertainties then I unequivocally reject your preferred approach to mathematics. It’s rather close to useless in a whole lot of situations that matter. Mine is better. To illustrate that I don’t even need to construct a scenario involving Omega, bets and saving the lives of cute little puppies. It is probably the sort of thing that could make it possible to construct an AI more easily or more effectively than yours. (I’m not certain about that—but that works almost as well as an illustration.)
It has rather more testable consequences than an item that has moved out of my future light cone.
Rather less, i.e. none. You at least have memories of something that’s moved out of your future light cone.
If you are unable to assign probabilities to logical uncertainties then I unequivocally reject your preferred approach to mathematics. It’s rather close to useless in a whole lot of situations that matter.
What did I say that made you think I don’t regard logical uncertainty as an important problem? But you’re bluffing if you’re claiming to have a solution to this problem. Anyway, any serious approach to logical uncertainty has to grapple with the fact that we have limited resources for computation (e.g. for computing probabilities!).
Rather less, i.e. none. You at least have memories of something that’s moved out of your future light cone.
No I don’t. I remember hardly anything about stuff that has passed out of my future light cone. Especially about small dark objects. And since actual information is lost when stuff goes out of my future light cone some of those objects no longer have enough evidence about them in the entire light cone to be able to reconstruct what they are. Although I suppose there is a probability distribution over possible combinations of object-going-out-of-the-light-cone that could be constructed based on remaining evidence.
Things existing has nothing to do with me being able to detect them or to remember them or to in principle be able to deduce their existence based on evidence.
As for “a proof exists”, that is something that can sometimes itself be proven or refuted. I’m told that sometimes you can even prove that a proof exists without having the proof. Which seems to rely on a bizarre definition of proof, but hey, mathematicians are into that sort of thing.
My position: any way of defining terms such that assigning probabilities to “a proof exists” is forbidden is a broken way to define such terms. It neither matches the intuitive way we use the language nor constitutes a useful way to carve up reality.
What did I say that made you think I don’t regard logical uncertainty as an important problem? But you’re bluffing if you’re claiming to have a solution to this problem.
I’m not bluffing anything. Nor am I writing a treatise on logical uncertainty. But I’ll tell you this: if I’m locked in a room, someone reliable generates a random 40-digit number from a uniform distribution right in front of me, I’m not given a computer, and I’m asked to bet at even odds that it is not a prime number, I am going to take that bet. Because it probably isn’t a prime number. That’s right, probably not.
It’s not because I’m clever and prestigious and am claiming to have deep wisdom about logical uncertainty. It’s because I’m not stupid and I assign probabilities to stuff so I can win. If being impressive and insightful requires me to assign only 0% and 100% to things like that, then I don’t want to be impressive and insightful. Because that is literally for losers.
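The "probably not prime" judgment can be quantified by the prime number theorem: the density of primes near 10^40 is about 1/ln(10^40), i.e. roughly one percent, so the even-odds bet on "not prime" is strongly favorable. A quick sketch:

```python
import math

# Density of primes near 10^40, by the prime number theorem.
p_prime = 1 / math.log(10 ** 40)   # ≈ 0.0109
p_not_prime = 1 - p_prime          # ≈ 0.989: take the even-odds bet
```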
There’s very little in that I disagree with. In particular I think “winning” is a fine summary of what probability is for. But there is nothing for you to win when assigning a probability to a claim that has no testable consequences—winning is a testable consequence.
My position: any way of defining terms such that assigning probabilities to “a proof exists” is forbidden is a broken way to define such terms. It neither matches the intuitive way we use the language nor constitutes a useful way to carve up reality.
Serious question: what’s special about the phrase “a proof exists” that requires you to assign a probability to it? Presumably there are some grammatically correct assertions that you don’t feel should have numbers attached to them.
I don’t consider “can have probabilities assigned” to be an exception that requires a special case. It gets treated the same as any other logical uncertainty. You can either handle logical uncertainty or you can’t.
Then what is it about the assertion “a proof exists” that calls out to have a number attached to it? Why is it similar to “a written proof will exist by tomorrow noon” and not similar to “the exquisite corpse will drink the new wine”?
In case it wasn’t clear, I rejected this line of reasoning. “A proof exists” is not a special case that needs justification.
Please refer to my first few comments on the subject. They constitute everything I wish to say on the (rather unimportant) subject of whether “a proof exists” is permitted. I don’t expect this conversation to produce any new insights.
Update!
About feasibility, I might say more later.
A positive but minuscule amount.
Right—but if there were no ‘fact-of-the-matter’ as to whether a proof exists, why should that amount be non-zero at all?
But that isn’t what either of us said. You mentioned P(you can find a proof). I am telling you (telling you, modulo standard open problems) that this can be very small even after another agent has found a proof. This is a standard family of topics in computer science.
I am aware it can be very small. The only sense in which I claimed otherwise was by a poor choice of wording. The use I made of the claim that “Agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems” was to argue for the proposition that there is a fact-of-the-matter about which theorems are provable in a given system. You accept that my finding a proof causes you to update P(you can find a proof) upwards by a strictly positive amount—from which I infer that you accept that there is a fact-of-the-matter as to whether a proof exists. In which case, you are not arguing with my conclusion, merely with a step I used in deriving it—a step I have replaced—so does that not screen off my conclusion from that step—so why are you still arguing with me?
I am still arguing with you because I think your misstep poisons more than you have yet realized, not to get on your nerves.
No. “I can find a proof in time t” is a definite event whose probability maybe can be measured (with difficulty!). “A proof exists” is a much murkier statement and it is much more difficult to discuss its probability. (For instance it is not possible to have a consistent probability distribution over assertions like this without assigning P(proof exists) = 0 or P(proof exists) = 1. Such a consistent prior is an oracle!)
I wasn’t suggesting you were trying to get on my nerves. I just think we’re talking past each other.
As a first approximation, what’s wrong with “\lim_{t → \infty} P(I can find a proof in time t)”?
Also, I don’t see why the prior has to be oracular; what’s wrong with, say, P(the 3^^^3th decimal digit of pi is even)=½? But then if the digit is X, then surely a proof exists that it is X (because, in principle, the digit can be computed in finitely many steps); it must be some X in [[:digit:]], so if it is even a proof exists that it is even; otherwise (sharp swerve) one does not, and P=½. Not sure about that sharp swerve; if I condition all my probabilities on |arithmetic is consistent) then it’s ok. But then, assuming I actually need to do so, the probabilities would be different if conditioned on |arithmetic is inconsistent), and thus by finding a proof, you find evidence for or against the assertion that arithmetic is consistent. But things you can find evidence on, exist! (They are the sheep that determine your pebbles.) So where did I go wrong? (Did I slip a meta-level somewhere? It’s possible; I was juggling them a bit.)
I slipped. It’s P(I will find a proof in time t) that is asking for the probability of a definite event. It’s not that evaluating this number at large t is so problematic, it’s that it doesn’t capture what people usually mean by “provable in principle.”
If A is logically implied by B, then P(A) >= P(B), or else you are committing a version of the conjunction fallacy. One can certainly compute the digits of pi, so that since (as non-intuitionists insist anyway) either the $n$th digit is even, or it is odd, we must have P(nth digit is even) > P(axioms) or P(n digit is odd) > P(axioms). P becomes an oracle by testing, for each assertion x, whether P(x) > P(axioms). There might be ways out of this but they require you to think about feasibility.
Suppose that a proof is a finite sequence of symbols from a finite alphabet (which supposition seems reasonable, at least to me). Suppose that you can determine whether a given sequence constitutes a proof, in finite time (not necessarily bounded). Then construct an ordering on sequences (can be done, it’s the union over n in (countable) N of sequences of length n (finitely many), is thus countable), and apply the determination procedure to each one in turn. Then, if a proof exists, you will find it in finite time by this method; thus P(you will find a proof by time t) tends to 1 as t->infty if a proof exists, and is constant 0 forall t if no proof exists.
There’s an obvious problem; we can’t determine with P=1 that a given sequence constitutes a proof (or does not do so). But suppose becoming 1 bit more sure, when not certain, of the proof-status of a given sequence, can always be done in finite time. Then learn 1 bit about sequence 1, then sequence 1, then seq 2, then seq 1, then seq 2, then seq 3… Then for any sequence, any desired level of certainty is obtained in finite time.
If something is provable in principle, then (with a certain, admittedly contrived and inefficient search algorithm) the proof can be found in finite time with probability 1. No?
Finite amount of time, yes. Feasible amount of time, no, unless P = NP. When I said that you were considering agents with unlimited resources, this is what I meant—agents for whom “in principle” is not different from “in practice.” There are no such agents under the sun.
But you don’t have to have unlimited resources, you just have to have X large but finite amount of resources, and you don’t know how big X is.
Of course, in order to prove that your resources are sufficient to find the proof, without simply going ahead and trying to find the proof, you would need those resources to be unlimited—because you don’t know how big X is. But you still know it’s finite. “Feasibly computable” is not the same thing as “computable”. “In principle” is, in principle, well defined. “In practice” is not well-defined, because as soon as you have X resources, it becomes possible “in practice” for you to find the proof.
I say again that I do not need to postulate infinities in order to postulate an agent which can find a given proof. For any provable theorem, a sufficiently (finitely) powerful agent can find it (by the above diagonal algorithm); equivalently, an agent of fixed power can find it given sufficient (finite) time. So, while such might be “unfeasible” (whatever that might mean), I can still use it as a step in a justification for the existence of infinities.
I don’t think that’s valid—even if I know (P=1) that there is a fact-of-the-matter about whether the nth digit is even, if I don’t have any information causally determined by whether the nth digit is even then I assign P(even) = P(odd) = ½. If I instead only believe with P=P(axioms) that a fact-of-the-matter exists, then I assign P(even) = P(odd) = ½ * P(axioms). Axioms ⇏ even. Axioms ⇒ (even or odd). P(axioms) = P(even or odd) = P(even)+P(odd) = (½ + ½) * P(axioms) = P(axioms), no problem. “A fact-of-the-matter exists for statement A” is (A or ¬A), and assuming that our axioms include Excluded Middle, P(A or ¬A) >= P(axioms).
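The probability bookkeeping in that paragraph is easy to check mechanically. The prior on the axioms below is an arbitrary illustrative value, not anything from the discussion:

```python
# Sanity check of the bookkeeping above: splitting belief in the
# axioms evenly between "even" and "odd" and then recombining via
# P(even or odd) recovers P(axioms) exactly, so nothing is lost
# or double-counted.
p_axioms = 0.9                        # illustrative prior on the axioms
p_even = 0.5 * p_axioms               # no evidence favours even...
p_odd = 0.5 * p_axioms                # ...or odd
p_fact_of_the_matter = p_even + p_odd # P(even or odd)
assert abs(p_fact_of_the_matter - p_axioms) < 1e-12
```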
Summary: P is about my knowledge; existence of a fact-of-the-matter is about, well, the fact-of-the-matter. As far as I can tell, you’re confusing map and territory.
Suggestions of dire wrongness? Bah. Now I’m all tempted to read through the context. But so far I’ve failed to see the point!
Wait. “A proof exists” is murky? What sort of bizarre twisted mathematical reasoning is behind that? (Not that I’m saying mathematics doesn’t say equally bizarre and twisted things!)
In the set of all mathematical things you can say either there is one that constitutes a proof or there isn’t. That doesn’t mean it is possible to establish whether such a proof exists by using all the computing power in the universe but it still exists. Not a big deal.
Counter assertion: Yes there is. It’s just mundane logical uncertainty. But even if you decide you’re not allowed to assign probabilities to logical unknowns (making the whole exercise tautologically pointless) that doesn’t make “a proof exists” murky. If anything it makes it arbitrarily more clear cut. It means you’re allowed to say ‘a proof exists’ or the reverse claim but have forbidden yourself the chance to assign probabilities to the possibility. And not allowing yourself to think about it doesn’t do anything to change whether the thing exists or not.
“A proof exists, and here it is” is not at all murky. “A proof exists, I’m close to one and will show it to you soon” is not at all murky. Either statement could be true or false, and testably so. In the first case, it is just a matter of checking. In the second case, it is just a matter of waiting.
“A proof exists” full stop is in much worse shape. It has no testable consequences.
Certainly if “the set of all mathematical things” (even “the set of all mathematical proofs”) has some Platonic existence, then EC429 is in much better shape. But this is exactly the kind of thing that any “infinite set atheist” would be unwilling to take for granted.
Just! Mundane!
It has rather more testable consequences than an item that has moved out of my future light cone. Such things don’t stop existing or not existing because I personally am not able to test them. Because the universe isn’t about me any more than mathematical propositions are.
Yes. If you are unable to assign probabilities to logical uncertainties then I unequivocally reject your preferred approach to mathematics. It’s rather close to useless in a whole lot of situations that matter. Mine is better. To illustrate that I don’t even need to construct a scenario involving Omega, bets and saving the lives of cute little puppies. It is probably the sort of thing that could make it possible to construct an AI more easily or more effectively than yours. (I’m not certain about that—but that works almost as well as an illustration.)
Rather less, i.e. none. You at least have memories of something that’s moved out of your future light cone.
What did I say that made you think I don’t regard logical uncertainty as an important problem? But you’re bluffing if you’re claiming to have a solution to this problem. Anyway, any serious approach to logical uncertainty has to grapple with the fact that we have limited resources for computation (e.g. for computing probabilities!).
No I don’t. I remember hardly anything about stuff that has passed out of my future light cone. Especially about small dark objects. And since actual information is lost when stuff goes out of my future light cone, some of those objects no longer have enough evidence about them in the entire light cone to be able to reconstruct what they are. Although I suppose there is a probability distribution over possible combinations of objects-going-out-of-the-light-cone that could be constructed based on the remaining evidence.
Things existing has nothing to do with me being able to detect them or to remember them or to in principle be able to deduce their existence based on evidence.
As for “a proof exists”, that is something that can sometimes be proven to exist or not exist. I’m told that sometimes you can even prove that a proof exists without having a proof. Which seems to rely on a bizarre definition of proof but hey, mathematicians are into that sort of thing.
My position: Any way of defining terms such that assigning probabilities to “a proof exists” is forbidden is a broken way to define such terms. It neither matches the intuitive way we use the language nor constitutes a useful way to carve up reality.
I’m not bluffing anything. Nor am I writing a treatise on logical uncertainty. But I’ll tell you this: If I’m locked in a room, someone reliable generates a random 40 digit number from a uniform distribution right in front of me and I’m not given a computer and I’m asked to bet at even odds that it is not a prime number I am going to take that bet. Because it probably isn’t a prime number. That’s right, probably not.
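That bet is easy to quantify. By the prime number theorem, the density of primes near N is about 1/ln(N), so a uniformly random 40-digit number is prime with probability roughly 1/ln(10^40), a little over 1%:

```python
from math import log

# Prime number theorem: the density of primes near N is ~ 1/ln(N).
# A uniform 40-digit number is near 10**40, so the chance that it
# is prime is roughly 1/ln(10**40), about 1 in 92.
p_prime = 1 / log(10**40)

# Even-odds bet that the number is NOT prime: expected profit per
# dollar staked is (1 - p_prime) - p_prime, comfortably positive.
edge = (1 - p_prime) - p_prime
```

So even with no computer and no primality test, taking the "not prime" side at even odds wins about 98 times in 100.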
It’s not because I’m clever and prestigious and am claiming to have deep wisdom about logical uncertainty. It’s because I’m not stupid and I assign probabilities to stuff so I can win. If being impressive and insightful requires me to only assign 0% and 100% to things like that then I don’t want to be impressive and insightful. Because that is literally for losers.
There’s very little in that I disagree with. In particular I think “winning” is a fine summary of what probability is for. But there is nothing for you to win when assigning a probability to a claim that has no testable consequences—winning is a testable consequence.
Just the most fundamental part:
Serious question: what’s special about the phrase “a proof exists” that requires you to assign a probability to it? Presumably there are some grammatically correct assertions that you don’t feel should have numbers attached to them.
I don’t consider “can have probabilities assigned” to be an exception that requires a special case. It gets treated the same as any other logical uncertainty. You can either handle logical uncertainty or you can’t.
I don’t understand. You’re saying you do have a prior probability for every grammatically correct assertion?
No, that isn’t something I’ve said.
Then what is it about the assertion “a proof exists” that calls out to have a number attached to it? Why is it similar to “a written proof will exist by tomorrow noon” and not similar to “the exquisite corpse will drink the new wine”?
In case it wasn’t clear, I rejected this line of reasoning. “A proof exists” is not a special case that needs justification.
Please refer to my first few comments on the subject. They constitute everything I wish to say on the (rather unimportant) subject of whether “a proof exists” is permitted. I don’t expect this conversation to produce any new insights.