I agree with the main tenets of your view—rejecting the platonic realm of mathematical truths, instead founding mathematics rigorously and finitely on formal systems—but you do not hold fast to it; for example, the notion of a formal system “talking about itself” appears deeply semantic rather than syntactic.
I will speak informally about rigorous mathematics.
To prove that □ does what we think it does, we have to appeal to second-order arithmetic.
There are two ways I can communicate to you what PA is:
Direct: Write out the grammar in full and spend a few days with you pushing symbols around, showing you which to accept and reject. Excluding human error, any theorem you prove with this system will also be a (PA-)theorem to me.
Implemented: Write out the axiomatization in an existing (stronger) formal system (such as ZFC) which we already both understand and have the same conception of.
It is not possible to consider Gödel’s theorem with only a direct understanding of PA; you need an implemented one. This isn’t a problem, though: suppose you had a direct understanding of PA and any understanding of ZFC. It is then very easy to give you an implemented understanding of PA, because you can easily check that my implementation corresponds to your direct understanding one axiom at a time.
I don’t consider formal systems to talk about themselves
Inside our implemented PA we construct a predicate and then prove in ZFC that it satisfies a particular property defined in terms of PA. Specifically, we construct a PA formula Bew which in ZFC is proved to satisfy a specification stating that PA |- P is equivalent to PA |- Bew(“P”). (I am just making up symbols here; Gödel uses a different notation. If it’s not clear I can write it out much more formally, but I’m sure this makes sense.) This dissolves the question of whether PA “talks about itself” without infinite regress, because one must have a direct understanding of some foundation to ever get started doing rigorous mathematics.
it is a basic principle of my philosophy that [..] formal systems [are real]
This doesn’t make sense to me. I have no idea what real means here. PA was created by Peano; of course, if he communicated this theory to someone else (by either of the means defined earlier), or someone independently came up with an isomorphic system, they will prove the same theorems. This surprising coincidence can be very misleading (it is the cause of platonism, after all!), so it is important to realize why two people share the same theorems: because they start from the same axioms and apply the same deduction rules! Not because there is some magical spirit realm of mathematical truths which they divine from.
I agree with the main tenets of your view—rejecting the platonic realm of mathematical truths, instead founding mathematics rigorously and finitely on formal systems—but you do not hold fast to it
This is because you have misconstrued my view. My philosophy of syntacticism is not formalism, nor is it a straight rejection of Platonism. Instead it is a Platonic formalism, in which the Platonic ideals are of the form “thus-and-such is derivable from these axioms with these deduction rules”, except that any attempt to talk directly and formally about those Platonic ideals fails (because that involves application of a formal system to a formal system).
It is not possible to consider Gödel’s theorem with only a direct understanding of PA; you need an implemented one.
The point of my post is that ‘implemented PA’ in fact implements “PA”, and there can be no rigorous proof that the referent of “PA” is PA (because such a rigorous proof in fact ends with ‘“PA” is PA’, and how do we know that this new system’s “PA” is PA?)
Inside our implemented PA we construct a predicate and then prove in ZFC that it satisfies a particular property defined in terms of PA.
But how do we prove that (ZFC proves statement about PA) ⇔ (statement about PA)? Only by appealing to a higher-order theory. After all, I can define a theory T by the axioms “1. PA is consistent. 2. PA proves PA is consistent.”, where (if necessary) PA is defined by listing PA’s axioms, and yet if I were to conclude from this that PA proves PA’s consistency without thereby becoming inconsistent, you would scream blue Gödelian murder, because you have a proof in ZFC that if PA proves PA is consistent then PA is not consistent. But I have a proof in T that PA is consistent and proves PA consistent (a trivial one, in fact). So how can you justify privileging ZFC’s referent of “PA” over T’s referent of “PA”? Only by appeal to some meta-theory—or by informal justification.
I have no idea what real means here.
It is a fact that agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems. Therefore there is some sense in which the theorems are inherent in the (axioms + deduction rules): there is a truth about what those (axioms + deduction rules) lead to, and that truth exists outside of any implementation. But that truth is not like a rock, it is not a physical object—so it must be a metaphysical one. Thus, Platonism.
But then why privilege any particular set of rules as having Platonic existence? Thus, all possible formal systems have Platonic existence.
The “magical spirit realm” does not say that “2+2=4″. But it does say that “If you apply the rules of PA to evaluate the expression 2+2, a failure to converge on 4 implies a failure to follow the rules of PA” (which failure might, for instance, be a cosmic ray hitting a neuron).
Therefore there is some sense in which the theorems are inherent in the (axioms + deduction rules): there is a truth about what those (axioms + deduction rules) lead to, and that truth exists outside of any implementation.
If that is so, then how come others tend to reach the same truth? In the same way that there is something outside me that produces my experimental results (The Simple Truth), so there is something outside me that causes it to be the case that, when I (or any other cognitive agent) implements this particular algorithm, this particular result results.
It is not a mind-projection fallacy, any more than “the sheep control my pebbles” is a mind-projection fallacy. It’s just that it’s operating one meta-level higher.
If that is so, then how come others tend to reach the same truth? In the same way that there is something outside me that produces my experimental results (The Simple Truth), so there is something outside me that causes it to be the case that, when I (or any other cognitive agent) implements this particular algorithm, this particular result results.
People have very similar brains, and I’d bet that all of the people whose ideas are cognitively available to you shared a similar cultural experience (at least in terms of what intellectual capital was/is available to them).
Viewing mathematics as something that is at least partially a reflection of the way that humans tend to compress information, it seems like you could argue that there is an awful lot of stuff to unpack when you say “2+2 = 4 is true outside of implementation”, as well as in the term “cognitive agent”.
What is clear to me is that when we set up a physical system (such as a Von Neumann machine, or a human who has been ‘set up’ by being educated and then asked a certain question) in a certain way, some part of the future state of that system is (say with 99.999% likelihood) recognizable to us as output (perhaps certain patterns of light resonate with us as “the correct answer”, perhaps some phonemes register in our cochlea and we store them in our working memory and compare them with the ‘expected’ phonemes). There appears to be an underlying regularity, but it isn’t clear to me what the true reduction looks like! Is the computation the ‘bottom level’? Do we aim to rephrase mathematics in terms of some algorithms that are capable of producing it? Are we then to take computation as “more fundamental” than physics?
What is clear to me is that when we set up a physical system (such as a Von Neumann machine, or a human who has been ‘set up’ by being educated and then asked a certain question) in a certain way, some part of the future state of that system is (say with 99.999% likelihood) recognizable to us as output (perhaps certain patterns of light resonate with us as “the correct answer”)
But note that there are also patterns of light which we would interpret as “the wrong answer”. If arithmetic is implementation-dependent, isn’t it a bit odd that whenever we build a calculator that outputs “5″ for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)? Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4? Because, if arithmetic is implementation-dependent, you should be able to do so.
Are we then to take computation as “more fundamental” than physics?
Yes! (So long as we define computation as “abstract manipulation-rules on syntactic tokens”, and don’t make any condition about the computation’s having been implemented on any substrate.)
But note that there are also patterns of light which we would interpret as “the wrong answer”.
I did note that; maybe not explicitly, but it isn’t really something that anyone would expect another person not to consider.
isn’t it a bit odd that whenever we build a calculator that outputs “5” for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)?
It doesn’t seem odd at all, we have an expectation of the calculator, and if it fails to fulfill that expectation then we start to doubt that it is, in fact, what we thought it was (a working calculator). This refocuses the issue on us and the mechanics of how we compress information; we expected information ‘X’ at time t, but instead received ‘Y’, and decide that something is wrong with our model (and then aim to fix it by figuring out whether it is indeed a wiring problem, or a bit-flip, or a bug in the programming of the calculator, or some electromagnetic interference).
Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4?
No. But why is this? Because if (a) [a reasonable human would agree it implements arithmetic] and (b) [it disagrees with us on whether 2+2 equals 4] both hold, then (c) [the human decides she was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree that 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.
The implementation is decidedly correct only if it demonstrates itself to be correct. Only if it fulfills our expectations of it. With a calculator, we are looking for something that allows us to extend our ability to infer things about the world. If I know that a car has a mass of 1000 kilograms and a speed of 200 kilometers per hour, then I can determine whether it will be able to topple a wall, given some number that encodes the amount of force the wall can withstand. I compute the output and compare it to the data for the wall.
Because, if arithmetic is implementation-dependent, you should be able to do so.
I tend to think it depends on a human-like brain that has been trained to interpret ‘2’, ‘+’ and ‘4’ in a certain way, so I don’t readily agree with your claim here.
Yes! (So long as we define computation as “abstract manipulation-rules on syntactic tokens”, and don’t make any condition about the computation’s having been implemented on any substrate.)
I’ll look over it, but given what you say here I’m not confident that it won’t be an attempt at a resurrection of Platonism.
It doesn’t seem odd at all, we have an expectation of the calculator, and if it fails to fulfill that expectation then we start to doubt that it is, in fact, what we thought it was (a working calculator).
Except that if you examine the workings of a calculator that does agree with us, you’re much much less likely to find a wiring fault (that is, that it’s implementing a different algorithm).
if (a) [a reasonable human would agree it implements arithmetic] and (b) [it disagrees with us on whether 2+2 equals 4] both hold, then (c) [the human decides she was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree that 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.
If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case “Has been asked 2+2”, which overrides the usual algorithm and just outputs 4… would the human then claim they’d “made it implement arithmetic”? I don’t think so.
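To make the hypothetical concrete, here is a minimal sketch (the function name and the faulty rule are invented for illustration) of such a trapped machine: it passes the one spot check while running a wrong algorithm everywhere else.

```python
# Hypothetical "trapped" adder: the trap hard-codes the single case we
# checked, while the machine's usual algorithm is wrong everywhere else.
def trapped_add(a, b):
    if (a, b) == (2, 2):  # the trap: override the usual algorithm
        return 4
    return a + b + 1      # the (wrong) usual algorithm

print(trapped_add(2, 2))  # 4: passes the spot check on 2+2
print(trapped_add(3, 3))  # 7: so it is not implementing arithmetic
```

Patching in the trap makes the machine agree on 2+2 without making it an implementation of arithmetic, which is the point of the example.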
I’ll try a different tack: an implementation of arithmetic can be created which is general and compact (in a Solomonoff sense): we are able to make calculators rather than Artificial Arithmeticians. Clearly not all concepts can be compressed in this manner, by a counting argument. So there is a fact-of-the-matter that “these {foo} are the concepts which can be compressed by thus-and-such algorithm”. (For instance, arithmetic on integers up to N can be formalised in O(log N) bits, which grows strictly slower than O(N); thus integer arithmetic is compressed by positional numeral systems.) That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven’t heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
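The O(log N) claim can be illustrated with a small sketch (a rough illustration, not a rigorous Solomonoff argument; the function names are invented): positional notation encodes n in about log₂ n bits, while a lookup-table “Artificial Arithmetician” for addition up to n needs on the order of n² entries.

```python
def positional_bits(n):
    """Bits to write n in binary (a positional numeral system)."""
    return max(1, n.bit_length())

def addition_table_entries(n):
    """Entries in a lookup table for a + b over all pairs a, b <= n."""
    return (n + 1) ** 2

# The positional encoding grows logarithmically; the table quadratically.
for n in (10, 1000, 10**6):
    print(n, positional_bits(n), addition_table_entries(n))
```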
I’ll look over it, but given what you say here I’m not confident that it won’t be an attempt at a resurrection of Platonism.
What’s wrong with resurrecting (or rather, reformulating) Platonism? Although, it’s more a Platonic Formalism than straight Platonism.
If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case “Has been asked 2+2”, which overrides the usual algorithm and just outputs 4… would the human then claim they’d “made it implement arithmetic”? I don’t think so.
Well, this seems a bit unclear. We are operating under the assumption that the setup looks very similar to a correct setup, close enough to fool a reasonable expert. So while the previous fault would cause some consternation and force the expert to lower his priors for “this is a working calculator”, it doesn’t follow that he wouldn’t make the appropriate adjustment and then (upon seeing nothing else wrong with it) decide that it is likely to resume working correctly.
That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven’t heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
Yes, it would be true, but what exactly is it that ‘is true’? The human brain is a tangle of probabilistic algorithms playing various functional roles. It is “intuitively obvious” that there should be a Solomonoff-irreducible (up to some constant) program that can be implemented, given sufficient background knowledge of all of the components involved: Boolean circuits arranged on some substrate in such and such a way that they “compute” “arithmetic operations on integers” (really, performing some fancy electrical acrobatics, later to be interpreted into a form we can perceive as an output, such as a sequence of pixels on some manner of screen arranged to resemble the numerical output we want). And this is a physical fact about the universe: that things arranged in such a way lead to such an outcome.
It is not obvious that we should then reverse the matter and claim that we ought to project a computational Platonism on to reality any more than logical positivist philosophers should have felt justified in doing that with mathematics and predicate logic a hundred years ago.
It is clear to me that we can perceive ‘computational’ patterns in top level phenomena such as the output of calculators or mental computations and that we can and have devised a framework for organizing the functional role of these processes (in terms of algorithmic information theory/computational complexity/computability theory) in a way that allows us to reason generally about them. It is not clear to me that we are further justified in taking the epistemological step that you seem to want to take.
I’m inclined to think that there is a fundamental problem with how you are approaching epistemology, and you should strongly consider looking into Bayesian epistemology (or statistical inference generally). I am also inclined to suggest that you look into the work of C.S. Peirce, and E.T. Jaynes’ book (as was mentioned previously and is a bit of a favorite around here; it really is quite profound). You might also consider Judea Pearl’s book “Causality”; I think some of the material is quite relevant and it seems likely to me that you would be very interested in it.
ETA: To Clarify, I’m not attacking the computable universe hypothesis; I think it is likely right (though I think that the term ‘computable’ in the broad sense in which it is often used needs some unpacking).
I am arguing against your concept “that truth exists outside of any implementation”.
My claim is that “truth” can only be determined and represented within some kind of truth evaluating physical context; there is nothing about the resulting physical state that implies or requires non-physical truth.
Our minds are not transparent windows unto veridical reality; when you look at a rock, you experience not the rock itself, but your mind’s representation of the rock, reconstructed from photons bouncing off its surface.
To your question:
If that is so, then how come others tend to reach the same truth?
These others are producing physical artifacts such as writing or speech, which through some chain of physical interactions eventually trigger state changes in your brain. At a higher meta-level, you are taking multiple forms of observations, transforming them within your brain/mind, and then comparing them… eventually concluding that “others tend to reach the same truth”. Another mind with its own unique perspective may come to a different conclusion, such as “Fred is wearing a funny hat.”
Your conclusion on truth is a physical state in your mind, generated by physical processes. The existence of a metaphysical truth is not required for you to come to that conclusion.
Your conclusion on truth is a physical state in your mind, generated by physical processes. The existence of a metaphysical truth is not required for you to come to that conclusion.
I think a meta- has gone missing here: I can’t be certain that others tend to reach the same truth (rather than funny hats), and I can’t be certain that 2+2=4. I can’t even be certain that there is a fact-of-the-matter about whether 2+2=4. But it seems damned likely, given Occamian priors, that there is a fact-of-the-matter about whether 2+2=4 (and, inasmuch as a reflective mind can have evidence for anything, which has to be justified through a strange loop on the bedrock, I have strong evidence that 2+2 does indeed equal 4).
That “truth” in the map doesn’t imply truth in the territory, I accept. That there is no truth in the territory, I vehemently reject. If two minds implement the same computation, and reach different answers, then I simply do not believe that they were really implementing the same computation. If you compute 2+2 but get struck by a cosmic ray that flips a bit and makes you conclude “5!”, then you actually implemented the computation “2+2 with such-and-such a cosmic ray bitflip”.
I am not able to comprehend the workings of a mind which believes arithmetic truth to be a property only of minds, any more than I am able to comprehend a mind which believes sheep to be a property only of buckets. Your conclusion on sheep is a physical state in your mind, generated by physical processes. But the sheep still exist outside of your mind.
Your conclusion on sheep is a physical state in your mind, generated by physical processes. But the sheep still exist outside of your mind.
Restating my claim in terms of sheep: The identification of a sheep is a state change within a context of evaluation that implements sheep recognition. So a sheep exists in that context.
Physical reality however does not recognize sheep; it recognizes and responds to physical reality stuff. Sheep don’t exist within physical reality.
“Sheep” is at a different meta-level than the chain of physical inference that led to that classification.
That “truth” in the map doesn’t imply truth in the territory, I accept. That there is no truth in the territory, I vehemently reject.
“Truth” is at a different meta-level than the chain of physical inference that led to that classification. There is no requirement that “truth” is in the set of stuff that has meaning within the territory.
When you look at the statement 2+2=4 you think some form of “hey, that’s true”. When I look at the statement, I also think some form of “hey, that’s true”. We can then talk and both come to our own unique conclusion that the other person agrees with us. This process does not require a metaphysical arithmetic; it only requires a common context.
For example, we both have a proximal existence within the physical universe, we have a communication channel, we both understand English, and we both understand basic arithmetic. These types of common contexts allow us to make some very practical and reasonable assumptions about what the other person means.
Common contexts allow us to agree on the consequences of arithmetic.
The short summary is that meaning/existence is formed by contexts of evaluation, and common contexts allow us to communicate. These processes explain your observations and operate entirely within the physical universe. The concept of metaphysical existence is not needed.
When you look at the statement 2+2=4 you think some form of “hey, that’s true”. When I look at the statement, I also think some form of “hey, that’s true”. We can then talk and both come to our own unique conclusion that the other person agrees with us.
I think your argument involves reflection somewhere. The desk calculator agrees that 2+2=4, and it’s not reflective. Putting two pebbles next to two pebbles also agrees.
Look at the discussion under this comment; I maintain that cognitive agents converge, even if their only common context is modus ponens—and that this implies there is something to be converged upon. At the least, it is ‘true’ that that-which-cognitive-agents-converge-on takes the value that it does (rather than any other value, like “1=0”).
These processes explain your observations and operate entirely within the physical universe. The concept of metaphysical existence is not needed.
Mathematical realism also explains my observations and operates entirely within the mathematical universe; the concept of physical existence is not needed. The ‘physical existence hypothesis’ has the burdensome detail that extant physical reality follows mathematical laws; I do not see a corresponding burdensome detail on the ‘mathematical realism hypothesis’. Thus by Occam, I conclude mathematical realism and no physical existence.
I am not sure I have answered your objections because I am not sure I understand them; if I do not, then I plead merely that it’s 8AM, I’ve been up all night, and I need some sleep :(
I think your argument involves reflection somewhere. The desk calculator agrees that 2+2=4, and it’s not reflective. Putting two pebbles next to two pebbles also agrees.
Agreement with statements such as 2+2=4 is not a function that desk calculators perform. It is not the function performed when you place two pebbles next to two pebbles.
Agreement is an evaluation performed by your mind from its unique position in the universe.
… this implies there is something to be converged upon.
The conclusion that convergence has occurred must be made from a context of evaluation. You make observations and derive a conclusion of convergence from them. Convergence is a state of your map, not a state of the territory.
Mathematical realism also explains my observations and operates entirely within the mathematical universe; …
Mathematical realism appears to confuse the map for the territory—as does scientific realism, as does physical realism.
When I refer to physical reality or existence I am only referring to a convenient level of abstraction. Space, time, electrons, arithmetic, these all are interpretations formed from different contexts of evaluation. We form networks of maps to describe our universe, but these maps are not the territory.
Gottlob Frege coined the term context principle in his Foundations of Arithmetic, 1884 (translated). He stated it as “We must never try to define the meaning of a word in isolation, but only as it is used in the context of a proposition.”
I am saying that we must never try to identify meaning or existence in isolation, but only as they are formed by a context of evaluation.
When you state:
Putting two pebbles next to two pebbles also agrees.
I look for the context of evaluation that produces this result—and I recognize that the pebbles and agreement are states formed within your mind as you interact with the universe. To believe that these states exist in the universe you are interacting with is a mind projection fallacy.
It is a fact that agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems.
Not so! In most interesting formal systems there are exponentially many possible deductions that can be made. All known agents can only make a bounded number of possible deductions. A widely believed conjecture implies that longer-lived agents could only make a polynomial number of possible deductions.
Um, that’s not quite what I meant… if I prove a theorem you’re unlikely to prove its negation, and if I tell you I have a proof of a theorem you’re likely to be able to find one. Moreover, in the latter case, if you’re not able to find a proof, then either I am mistaken in my belief that I have a proof, or you haven’t looked hard enough (and in principle could find a proof if you did). That doesn’t mean that every agent with “I think therefore I am” will deduce the existence of rice pudding and income tax, it only means that if one agent can deduce it, the others in principle can do the same. For any formal system T, the set {X : T proves X} is recursively enumerable, no? (assuming ‘proves’ is defined by ‘a proof of finite length exists’, as happens in Gödel-numbering / PA “box”)
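For a toy illustration of recursive enumerability (a deliberately trivial system invented for this sketch, not PA): one axiom “A” and one rule “append B”. Enumerating all finite derivations in order of length lists every theorem eventually, though nothing bounds how long any particular theorem takes to appear.

```python
from itertools import count

def check_proof(steps):
    """A proof is a list of strings: the axiom "A", then rule applications
    (each step appends a single "B" to the previous one)."""
    if not steps or steps[0] != "A":
        return False
    return all(b == a + "B" for a, b in zip(steps, steps[1:]))

def enumerate_theorems():
    """Yield every theorem by enumerating proofs in order of length."""
    for n in count(1):
        candidate = ["A" + "B" * i for i in range(n)]
        if check_proof(candidate):
            yield candidate[-1]

gen = enumerate_theorems()
print([next(gen) for _ in range(4)])  # ['A', 'AB', 'ABB', 'ABBB']
```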
Um, that’s not quite what I meant… if I prove a theorem you’re unlikely to prove its negation,
As you know, I am outside of mainstream opinion on this. But this:
and if I tell you I have a proof of a theorem you’re likely to be able to find one.
is flatly wrong if P ≠ NP, and not controversially so. If I reach a point in a large search space, there is no guarantee that you can reach the same point in a feasible amount of time.
For any formal system T, the set {X : T proves X} is recursively enumerable, no? (assuming ‘proves’ is defined by ‘a proof of finite length exists’, as happens in Gödel-numbering / PA “box”)
Recursively enumerable sure. Not feasibly enumerable.
But why should feasibility matter? Sure, the more steps it takes to prove a proposition, the less likely you are to be able to find a proof. But saying that things are true only by virtue of their proof being feasible… is disturbing, to say the least. If we build a faster computer, do some propositions suddenly become true, because we now have the computing power to prove them?
Me saying I have a proof of a theorem should cause you to update P(you can find a proof) upwards. (If it doesn’t, I’d be very surprised.) Consequently, there is something common.
Similarly, no matter how low your prior probability for “PA is consistent”, so long as that probability is not 0, learning that I have proved a theorem should cause you to decrease your estimate of the probability that you will prove its negation.
But why should feasibility matter? Sure, the more steps it takes to prove a proposition, the less likely you are to be able to find a proof
Incidentally but importantly, lengthiness is not expected to be the only obstacle to finding a proof. Cryptography depends on this.
As to why feasibility matters: it’s because we have limited resources. You are trying to reason about reality from the point of view of a hypothetical entity that has infinite resources. If you wish to convince people to be less skeptical of infinity (your stated intention), you will have to take feasibility into account or else make a circular argument.
But saying that things are true only by virtue of their proof being feasible… is disturbing, to say the least. If we build a faster computer, do some propositions suddenly become true, because we now have the computing power to prove them?
I am certainly not saying that feasible proofs cause things to be true. Our previous slow computer and our new fast computer cause exactly the same number of important things to be true: none at all. That is the formalist position, anyway.
Similarly, no matter how low your prior probability for “PA is consistent”, so long as that probability is not 0, learning that I have proved a theorem should cause you to decrease your estimate of the probability that you will prove its negation.
Not so. If I have P(PA will be shown inconsistent in fewer than m minutes) = p, then I also have P(I will prove the negation of your theorem in fewer than m+1 minutes) = p. Your ability to prove things doesn’t enter into it.
lengthiness is not expected to be the only obstacle to finding a proof
True; stick a ceteris paribus in there somewhere.
You are trying to reason about reality from the point of view of a hypothetical entity that has infinite resources.
Not so; I am reasoning about reality in terms of what it is theoretically possible we might conclude with finite resources. It is just that enumerating the collection of things it is theoretically possible we might conclude with finite resources requires infinite resources (and may not be possible even then). Fortunately I do not require an enumeration of this collection.
I am certainly not saying that feasible proofs cause things to be true. Our previous slow computer and our new fast computer cause exactly the same number of important things to be true: none at all. That is the formalist position, anyway.
So either things that are unfeasible to prove can nonetheless be true, or nothing is true. So why does feasibility matter again?
P(I will prove the negation of your theorem in fewer than m+1 minutes) = p
No, it is > p. P(I will prove 1=0 in fewer than m+1 minutes) = p + epsilon. P(I will prove 1+1=2 in fewer than m+1 minutes) = nearly 1. This is because you don’t know whether my proof was correct.
Me saying I have a proof of a theorem should cause you to update P(you can find a proof) upwards.
A positive but minuscule amount. This is how cryptography works! In less than a minute (aided by my very old laptop), I gave a proof of the following theorem: the second digit of each of the prime factors of n is 6, where
n = 44289087508518650246893852937476857335929624072788480361
It would take you much longer to find a proof (even though the proof is very short!).
But that isn’t what either of us said. You mentioned P(you can find a proof). I am telling you (telling you, modulo standard open problems) that this can be very small even after another agent has found a proof. This is a standard family of topics in computer science.
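The asymmetry being invoked can be sketched with a tiny stand-in (the 56-digit n above is left alone; these small primes are purely illustrative): checking a claimed factorisation is one multiplication, while finding it from scratch is a search that, at cryptographic sizes, is believed to be infeasible.

```python
p, q = 10007, 10009  # two small primes, standing in for a hard instance
n = p * q

# Verification: given the "proof" (the factors), one multiplication suffices.
assert p * q == n

# Search: trial division, feasible here only because n is tiny.
def smallest_factor(m):
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m

print(smallest_factor(n))  # 10007
```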
I am aware it can be very small. The only sense in which I claimed otherwise was by a poor choice of wording. The use I made of the claim that “Agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems” was to argue for the proposition that there is a fact-of-the-matter about which theorems are provable in a given system. You accept that my finding a proof causes you to update P(you can find a proof) upwards by a strictly positive amount—from which I infer that you accept that there is a fact-of-the-matter as to whether a proof exists. In which case, you are not arguing with my conclusion, merely with a step I used in deriving it—a step I have replaced—so does that not screen off my conclusion from that step—so why are you still arguing with me?
I am still arguing with you because I think your misstep poisons more than you have yet realized, not to get on your nerves.
You accept that my finding a proof causes you to update P(you can find a proof) upwards by a strictly positive amount—from which I infer that you accept that there is a fact-of-the-matter as to whether a proof exists.
No. “I can find a proof in time t” is a definite event whose probability can perhaps be measured (with difficulty!). “A proof exists” is a much murkier statement and it is much more difficult to discuss its probability. (For instance it is not possible to have a consistent probability distribution over assertions like this without assigning P(proof exists) = 0 or P(proof exists) = 1. Such a consistent prior is an oracle!)
I am still arguing with you because I think your misstep poisons more than you have yet realized, not to get on your nerves.
I wasn’t suggesting you were trying to get on my nerves. I just think we’re talking past each other.
“A proof exists” is a much murkier statement and it is much more difficult to discuss its probability.
As a first approximation, what’s wrong with “\lim_{t → \infty} P(I can find a proof in time t)”?
Also, I don’t see why the prior has to be oracular; what’s wrong with, say, P(the 3^^^3th decimal digit of pi is even)=½? But then if the digit is X, then surely a proof exists that it is X (because, in principle, the digit can be computed in finitely many steps); it must be some X in [[:digit:]], so if it is even a proof exists that it is even; otherwise (sharp swerve) one does not, and P=½. Not sure about that sharp swerve; if I condition all my probabilities on |arithmetic is consistent) then it’s ok. But then, assuming I actually need to do so, the probabilities would be different if conditioned on |arithmetic is inconsistent), and thus by finding a proof, you find evidence for or against the assertion that arithmetic is consistent. But things you can find evidence on, exist! (They are the sheep that determine your pebbles.) So where did I go wrong? (Did I slip a meta-level somewhere? It’s possible; I was juggling them a bit.)
As a first approximation, what’s wrong with “\lim_{t → \infty} P(I can find a proof in time t)”?
I slipped. It’s P(I will find a proof in time t) that is asking for the probability of a definite event. It’s not that evaluating this number at large t is so problematic, it’s that it doesn’t capture what people usually mean by “provable in principle.”
Also, I don’t see why the prior has to be oracular; what’s wrong with, say, P(the 3^^^3th decimal digit of pi is even)=½?
If A is logically implied by B, then P(A) >= P(B), or else you are committing a version of the conjunction fallacy. One can certainly compute the digits of pi, so that since (as non-intuitionists insist anyway) either the $n$th digit is even, or it is odd, we must have P($n$th digit is even) > P(axioms) or P($n$th digit is odd) > P(axioms). P becomes an oracle by testing, for each assertion x, whether P(x) > P(axioms). There might be ways out of this but they require you to think about feasibility.
It’s P(I will find a proof in time t) that is asking for the probability of a definite event. It’s not that evaluating this number at large t is so problematic, it’s that it doesn’t capture what people usually mean by “provable in principle.”
Suppose that a proof is a finite sequence of symbols from a finite alphabet (which supposition seems reasonable, at least to me). Suppose that you can determine whether a given sequence constitutes a proof, in finite time (not necessarily bounded). Then construct an ordering on sequences (possible: the set of all sequences is the union over n in (countable) N of the finitely many sequences of length n, and is thus countable), and apply the determination procedure to each one in turn. Then, if a proof exists, you will find it in finite time by this method; thus P(you will find a proof by time t) tends to 1 as t → ∞ if a proof exists, and is constantly 0 for all t if no proof exists.
There’s an obvious problem; we can’t determine with P=1 that a given sequence constitutes a proof (or does not do so). But suppose becoming 1 bit more sure, when not certain, of the proof-status of a given sequence, can always be done in finite time. Then learn 1 bit about sequence 1, then sequence 1, then seq 2, then seq 1, then seq 2, then seq 3… Then for any sequence, any desired level of certainty is obtained in finite time.
If something is provable in principle, then (with a certain, admittedly contrived and inefficient search algorithm) the proof can be found in finite time with probability 1. No?
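The search procedure described above can be written out directly; `is_proof` here is a toy stand-in for a real (assumed-decidable) proof checker:

```python
from itertools import count, product

def search_for_proof(is_proof, alphabet):
    """Enumerate every finite string over `alphabet` in order of
    length, testing each with the decidable checker `is_proof`.
    If any proof exists the loop halts; otherwise it runs forever."""
    for length in count(1):
        for letters in product(alphabet, repeat=length):
            candidate = "".join(letters)
            if is_proof(candidate):
                return candidate

# Toy checker standing in for a real verifier: a "proof" is any
# string containing the substring "qed".
found = search_for_proof(lambda s: "qed" in s, "deq")
print(found)  # "qed"
```

This is exactly the "contrived and inefficient" algorithm of the argument: guaranteed to terminate when a proof exists, with no bound whatever on how long that takes.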
If something is provable in principle, then (with a certain, admittedly contrived and inefficient search algorithm) the proof can be found in finite time with probability 1. No?
Finite amount of time, yes. Feasible amount of time, no, unless P = NP. When I said that you were considering agents with unlimited resources, this is what I meant—agents for whom “in principle” is not different from “in practice.” There are no such agents under the sun.
But you don’t have to have unlimited resources, you just have to have X large but finite amount of resources, and you don’t know how big X is.
Of course, in order to prove that your resources are sufficient to find the proof, without simply going ahead and trying to find the proof, you would need those resources to be unlimited—because you don’t know how big X is. But you still know it’s finite. “Feasibly computable” is not the same thing as “computable”. “In principle” is, in principle, well defined. “In practice” is not well-defined, because as soon as you have X resources, it becomes possible “in practice” for you to find the proof.
I say again that I do not need to postulate infinities in order to postulate an agent which can find a given proof. For any provable theorem, a sufficiently (finitely) powerful agent can find it (by the above diagonal algorithm); equivalently, an agent of fixed power can find it given sufficient (finite) time. So, while such might be “unfeasible” (whatever that might mean), I can still use it as a step in a justification for the existence of infinities.
One can certainly compute the digits of pi, so that since (as non-intuitionists insist anyway) either the $n$th digit is even, or it is odd, we must have P($n$th digit is even) > P(axioms) or P($n$th digit is odd) > P(axioms).
I don’t think that’s valid—even if I know (P=1) that there is a fact-of-the-matter about whether the nth digit is even, if I don’t have any information causally determined by whether the nth digit is even then I assign P(even) = P(odd) = ½. If I instead only believe with P=P(axioms) that a fact-of-the-matter exists, then I assign P(even) = P(odd) = ½ * P(axioms). Axioms ⇏ even. Axioms ⇒ (even or odd). P(axioms) = P(even or odd) = P(even)+P(odd) = (½ + ½) * P(axioms) = P(axioms), no problem. “A fact-of-the-matter exists for statement A” is (A or ¬A), and assuming that our axioms include Excluded Middle, P(A or ¬A) >= P(axioms).
Summary: P is about my knowledge; existence of a fact-of-the-matter is about, well, the fact-of-the-matter. As far as I can tell, you’re confusing map and territory.
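The coherence bookkeeping above can be checked mechanically, using a hypothetical numerical credence (0.99 is an arbitrary stand-in for P(axioms)):

```python
p_axioms = 0.99          # hypothetical credence in the axioms
p_even = 0.5 * p_axioms  # no evidence favours either parity
p_odd = 0.5 * p_axioms

# Axioms imply (even or odd); the disjuncts are exclusive, so the
# disjunction's probability is the sum, and no probability exceeds
# P(axioms) -- no conjunction fallacy, no oracle.
assert p_even + p_odd == p_axioms
assert p_even < p_axioms
```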
I am still arguing with you because I think your misstep poisons more than you have yet realized, not to get on your nerves.
Suggestions of dire wrongness? Bah. Now I’m all tempted to read through the context. But so far I’ve failed to see the point!
Wait. “A proof exists” is murky? What sort of bizarre twisted mathematical reasoning is behind that? (Not that I’m saying mathematics doesn’t say equally bizarre and twisted things!)
In the set of all mathematical things you can say either there is one that constitutes a proof or there isn’t. That doesn’t mean it is possible to establish whether such a proof exists by using all the computing power in the universe but it still exists. Not a big deal.
For instance it is not possible to have a consistent probability distribution over assertions like this without assigning P(proof exists) = 0 or P(proof exists) = 1.
Counter assertion: Yes there is. It’s just mundane logical uncertainty. But even if you decide you’re not allowed to assign probabilities to logical unknowns (making the whole exercise tautologically pointless) that doesn’t make “a proof exists” murky. If anything it makes it arbitrarily more clear cut. It means you’re allowed to say ‘a proof exists’ or the reverse claim but have forbidden yourself the chance to assign probabilities to the possibility. And not allowing yourself to think about it doesn’t do anything to change whether the thing exists or not.
“A proof exists, and here it is” is not at all murky. “A proof exists, I’m close to one and will show it to you soon” is not at all murky. Either statement could be true or false, and testably so. In the first case, it is just a matter of checking. In the second case, it is just a matter of waiting.
“A proof exists” full stop is in much worse shape. It has no testable consequences.
In the set of all mathematical things you can say either there is one that constitutes a proof or there isn’t. That doesn’t mean it is possible to establish whether such a proof exists by using all the computing power in the universe but it still exists. Not a big deal.
Certainly if “the set of all mathematical things” (even “the set of all mathematical proofs”) has some Platonic existence, then EC429 is in much better shape. But this is exactly the kind of thing that any “infinite set atheist” would be unwilling to take for granted.
“A proof exists” full stop is in much worse shape. It has no testable consequences.
It has rather more testable consequences than an item that has moved out of my future light cone. Such things don’t stop existing or not existing because I personally am not able to test them. Because the universe isn’t about me any more than mathematical propositions are.
It’s just mundane logical uncertainty.
Just! Mundane!
Yes. If you are unable to assign probabilities to logical uncertainties then I unequivocally reject your preferred approach to mathematics. It’s rather close to useless in a whole lot of situations that matter. Mine is better. To illustrate that I don’t even need to construct a scenario involving Omega, bets and saving the lives of cute little puppies. It is probably the sort of thing that could make it possible to construct an AI more easily or more effectively than yours. (I’m not certain about that—but that works almost as well as an illustration.)
It has rather more testable consequences than an item that has moved out of my future light cone.
Rather less, i.e. none. You at least have memories of something that’s moved out of your future light cone.
If you are unable to assign probabilities to logical uncertainties then I unequivocally reject your preferred approach to mathematics. It’s rather close to useless in a whole lot of situations that matter.
What did I say that made you think I don’t regard logical uncertainty as an important problem? But you’re bluffing if you’re claiming to have a solution to this problem. Anyway any serious approach to logical uncertainty has to grapple with the fact that we have limited resources for computation (e.g. for computing probabilities!).
Rather less, i.e. none. You at least have memories of something that’s moved out of your future light cone.
No I don’t. I remember hardly anything about stuff that has passed out of my future light cone. Especially about small dark objects. And since actual information is lost when stuff goes out of my future light cone some of those objects no longer have enough evidence about them in the entire light cone to be able to reconstruct what they are. Although I suppose there is a probability distribution over possible combinations of object-going-out-of-the-light-cone that could be constructed based on remaining evidence.
Things existing has nothing to do with me being able to detect them or to remember them or to in principle be able to deduce their existence based on evidence.
As for “a proof exists”, that is something that can sometimes be proven to exist or not exist. I’m told that sometimes you can even prove that a proof exists without having a proof. Which seems to rely on a bizarre definition of proof but hey, mathematicians are into that sort of thing.
My position: Any way of defining terms such that assigning probabilities to “a proof exists” is forbidden is a broken way to define such terms. It neither matches the intuitive way we use the language nor constitutes a useful way to carve up reality.
What did I say that made you think I don’t regard logical uncertainty as an important problem? But you’re bluffing if you’re claiming to have a solution to this problem.
I’m not bluffing anything. Nor am I writing a treatise on logical uncertainty. But I’ll tell you this: if I’m locked in a room, someone reliable generates a random 40-digit number from a uniform distribution right in front of me, I’m not given a computer, and I’m asked to bet at even odds that it is not a prime number, I am going to take that bet. Because it probably isn’t a prime number. That’s right, probably not.
It’s not because I’m clever and prestigious and am claiming to have deep wisdom about logical uncertainty. It’s because I’m not stupid and I assign probabilities to stuff so I can win. If being impressive and insightful requires me to only assign 0% and 100% to things like that then I don’t want to be impressive and insightful. Because that is literally for losers.
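For what it’s worth, the arithmetic behind that bet is short. By the prime number theorem, the density of primes near N is roughly 1/ln N, so for a uniformly random 40-digit number:

```python
import math

# Density of primes near a 40-digit number N ~ 10**40:
density = 1 / math.log(10 ** 40)
print(round(density, 4))  # 0.0109, i.e. roughly 1 in 92

# So at even odds, "not prime" wins about 99% of the time.
assert density < 0.02
```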
There’s very little in that I disagree with. In particular I think “winning” is a fine summary of what probability is for. But there is nothing for you to win when assigning a probability to a claim that has no testable consequences—winning is a testable consequence.
My position: Any way of defining terms such that assigning probabilities to “a proof exists” is forbidden is a broken way to define such terms. It neither matches the intuitive way we use the language nor constitutes a useful way to carve up reality.
Serious question: what’s special about the phrase “a proof exists” that requires you to assign a probability to it? Presumably there are some grammatically correct assertions that you don’t feel should have numbers attached to them.
I don’t consider “can have probabilities assigned” to be an exception that requires a special case. It gets treated the same as any other logical uncertainty. You can either handle logical uncertainty or you can’t.
Then what is it about the assertion “a proof exists” that calls out to have a number attached to it? Why is it similar to “a written proof will exist by tomorrow noon” and not similar to “the exquisite corpse will drink the new wine”?
In case it wasn’t clear, I rejected this line of reasoning. “A proof exists” is not a special case that needs justification.
Please refer to my first few comments on the subject. They constitute everything I wish to say on the (rather unimportant) subject of whether “a proof exists” is permitted. I don’t expect this conversation to produce any new insights.
I agree with the main tenets of your view—rejecting the platonic realm of mathematical truths, instead founding mathematics rigorously and finitely on formal systems—but you do not hold fast to it; for example, the notion of a formal system “talking about itself” appears deeply semantic rather than syntactic.
I will speak informally about rigorous mathematics.
There are two ways I can communicate to you what PA is:
Direct: Write out the grammar in full and spend a few days with you pushing symbols around, showing you which to accept and reject. Excluding human error, any theorem you prove with this system will also be a (PA-)theorem to me.
Implemented: Write out the axiomatization in an existing (stronger) formal system (such as ZFC) which we already both understand and have the same conception of.
It is not possible to consider Gödel’s theorem with only a direct understanding of PA; you need an implemented one. This isn’t a problem though: suppose you had a direct understanding of PA and any understanding of ZFC. It is then very easy to give an implemented understanding of PA, because you can easily check that my implementation corresponds to your direct understanding one axiom at a time.
Inside our implemented PA we construct a predicate and then prove in ZFC that it satisfies a particular property defined in terms of PA. Specifically, we construct a PA formula Bew which in ZFC is proved to satisfy a specification stating that PA |- P is equivalent to PA |- Bew(“P”). (I am just making up symbols here; Gödel uses a different notation. If it’s not clear I can write it out much more formally, but I’m sure this makes sense.) This dissolves the question of whether PA “talks about itself” or not without infinite regress, because one must have a direct understanding of some foundation to ever get started doing rigorous mathematics.
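For the record, the specification being alluded to is usually packaged as the Hilbert–Bernays–Löb derivability conditions (one common notation below; ⌜P⌝ is the Gödel numeral of P). Each is a statement about the implemented PA, proved in ZFC; the left-to-right half of the equivalence is (D1), and the converse additionally needs the Σ1-soundness of PA:

```latex
\begin{align*}
\text{(D1)}\quad & \mathrm{PA} \vdash P \;\Longrightarrow\;
  \mathrm{PA} \vdash \mathrm{Bew}(\ulcorner P \urcorner) \\
\text{(D2)}\quad & \mathrm{PA} \vdash \mathrm{Bew}(\ulcorner P \rightarrow Q \urcorner)
  \rightarrow \bigl(\mathrm{Bew}(\ulcorner P \urcorner)
  \rightarrow \mathrm{Bew}(\ulcorner Q \urcorner)\bigr) \\
\text{(D3)}\quad & \mathrm{PA} \vdash \mathrm{Bew}(\ulcorner P \urcorner)
  \rightarrow \mathrm{Bew}(\ulcorner \mathrm{Bew}(\ulcorner P \urcorner) \urcorner)
\end{align*}
```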
This doesn’t make sense to me. I have no idea what “real” means here. PA was created by Peano; of course if he communicated this theory to someone else (by either of the means defined earlier), or someone independently came up with an isomorphic system, they will prove the same theorems. This surprising coincidence can be very misleading (it is the cause of platonism after all!), so it is important that one realize why two people share the same theorems: because they start from the same axioms and apply the same deduction rules! Not because there is some magical spirit realm of mathematical truths which they divine from.
This is because you have misconstrued my view. My philosophy of syntacticism is not formalism, nor is it a straight rejection of Platonism. Instead it is a Platonic formalism, in which the Platonic ideals are of the form “thus-and-such is derivable from these axioms with these deduction rules”, except that any attempt to talk directly and formally about those Platonic ideals fails (because that involves application of a formal system to a formal system).
The point of my post is that ‘implemented PA’ in fact implements “PA”, and there can be no rigorous proof that the referent of “PA” is PA (because such a rigorous proof in fact ends with “‘PA’ is PA”, and how do we know that this new system’s “PA” is PA?).
But how do we prove that (ZFC proves statement about PA) ⇔ (statement about PA)? Only by appealing to a higher-order theory. After all, I can define a theory T by the axioms “1. PA is consistent. 2. PA proves PA is consistent.”, where (if necessary) PA is defined by listing PA’s axioms, and yet if I were to conclude from this that PA proves PA’s consistency without thereby becoming inconsistent, you would scream blue Gödelian murder, because you have a proof in ZFC that if PA proves PA is consistent then PA is not consistent. But I have a proof in T that PA is consistent and proves PA consistent (a trivial one, in fact). So how can you justify privileging ZFC’s referent of “PA” over T’s referent of “PA”? Only by appeal to some meta-theory—or by informal justification.
It is a fact that agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems. Therefore there is some sense in which the theorems are inherent in the (axioms + deduction rules): there is a truth about what those (axioms + deduction rules) lead to, and that truth exists outside of any implementation. But that truth is not like a rock, it is not a physical object—so it must be a metaphysical one. Thus, Platonism.
But then why privilege any particular set of rules as having Platonic existence? Thus, all possible formal systems have Platonic existence.
The “magical spirit realm” does not say that “2+2=4”. But it does say that “If you apply the rules of PA to evaluate the expression 2+2, a failure to converge on 4 implies a failure to follow the rules of PA” (which failure might, for instance, be a cosmic ray hitting a neuron).
Do I make sense now?
You are experiencing a mind projection fallacy.
The theorems don’t exist unless an implementation produces them, and once produced they exist only within a context that can represent them.
In the same way, the truth you refer to is generated by and exists within your mind. It has no existence outside of that implementation.
If that is so, then how come others tend to reach the same truth? In the same way that there is something outside me that produces my experimental results (The Simple Truth), so there is something outside me that causes it to be the case that, when I (or any other cognitive agent) implements this particular algorithm, this particular result results.
It is not a mind-projection fallacy, any more than “the sheep control my pebbles” is a mind-projection fallacy. It’s just that it’s operating one meta-level higher.
People have very similar brains and I’d bet that all of the ideas of people that are cognitively available to you shared a similar cultural experience (at least in terms of what intellectual capital was/is available to them).
Viewing mathematics as something that is at least partially a reflection of the way that humans tend to compress information, it seems like you could argue that there is an awful lot of stuff to unpack when you say “2+2 = 4 is true outside of implementation” as well as the term “cognitive agent”.
What is clear to me is that when we set up a physical system (such as a Von Neumann machine, or a human who has been ‘set up’ by being educated and then asked a certain question) in a certain way, some part of the future state of that system is (say with 99.999% likelihood) recognizable to us as output (perhaps certain patterns of light resonate with us as “the correct answer”, perhaps some phonemes register in our cochlea and we store them in our working memory and compare them with the ‘expected’ phonemes). There appears to be an underlying regularity, but it isn’t clear to me what the true reduction looks like! Is the computation the ‘bottom level’? Do we aim to rephrase mathematics in terms of some algorithms that are capable of producing it? Are we then to take computation as “more fundamental” than physics?
Does this make sense?
But note that there are also patterns of light which we would interpret as “the wrong answer”. If arithmetic is implementation-dependent, isn’t it a bit odd that whenever we build a calculator that outputs “5” for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)? Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4? Because, if arithmetic is implementation-dependent, you should be able to do so.
Yes! (So long as we define computation as “abstract manipulation-rules on syntactic tokens”, and don’t make any condition about the computation’s having been implemented on any substrate.)
I did note that, maybe not explicitly but it isn’t really something that anyone would expect another person not to consider.
It doesn’t seem odd at all: we have an expectation of the calculator, and if it fails to fulfill that expectation then we start to doubt that it is, in fact, what we thought it was (a working calculator). This refocuses the issue on us and the mechanics of how we compress information; we expected information ‘X’ at time t, but instead received ‘Y’ and decide that something is wrong with our model (and then aim to fix it by figuring out if it is indeed a wiring problem or a bit-flip or a bug in the programming of the calculator or some electromagnetic interference).
No. But why is this? Because if (a) [a reasonable human would agree implements arithmetic] and (b) [which disagrees with us on whether 2+2 equals 4] both hold, then (c) [the human decides she was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree with 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.
The implementation is decidedly correct only if it demonstrates itself to be correct; only if it fulfills our expectations of it. With a calculator, we are looking for something that allows us to extend our ability to infer things about the world. If I know that a car has a mass of 1000 kilograms and a speed of 200 kilometers per hour, then I can determine whether it will be able to topple a wall, given some number that encodes the amount of force the wall can withstand. I compute the output and compare it to the data for the wall.
I tend to think it depends on a human-like brain that has been trained to interpret ‘2’, ‘+’ and ‘4’ in a certain way, so I don’t readily agree with your claim here.
I’ll look over it, but given what you say here I’m not confident that it won’t be an attempt at a resurrection of Platonism.
Except that if you examine the workings of a calculator that does agree with us, you’re much much less likely to find a wiring fault (that is, that it’s implementing a different algorithm).
If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case “Has been asked 2+2”, which overrides the usual algorithm and just outputs 4… would the human then claim they’d “made it implement arithmetic”? I don’t think so.
I’ll try a different tack: an implementation of arithmetic can be created which is general and compact (in a Solomonoff sense); we are able to make calculators rather than Artificial Arithmeticians. Clearly not all concepts can be compressed in this manner, by a counting argument. So there is a fact-of-the-matter that “these {foo} are the concepts which can be compressed by thus-and-such algorithm”. (For instance, arithmetic on integers up to N can be formalised in O(log N) bits, which grows strictly slower than O(N); thus integer arithmetic is compressed by positional numeral systems.) That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome, where they haven’t heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
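The O(log N) claim is easy to make concrete; `positional_bits` below is an illustrative helper, not anything from the thread:

```python
import math

def positional_bits(N):
    """Bits needed to write any integer in [0, N) positionally."""
    return math.ceil(math.log2(N))

# A million distinct values: ~20 bits positionally, versus a
# million-entry unary/lookup-table representation.
print(positional_bits(10 ** 6))  # 20
assert positional_bits(10 ** 6) < 10 ** 6
```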
What’s wrong with resurrecting (or rather, reformulating) Platonism? Although, it’s more a Platonic Formalism than straight Platonism.
Well, this seems a bit unclear. We are operating under the assumption that the setup looks very similar to a correct setup, close enough to fool a reasonable expert. So while the previous fault would cause some consternation and force the expert to lower his priors for “this is a working calculator”, it doesn’t follow that he wouldn’t make the appropriate adjustment and then (upon seeing nothing else wrong with it) decide that it is likely to resume working correctly.
Yes, it would be true, but what exactly is it that ‘is true’? The human brain is a tangle of probabilistic algorithms playing various functional roles. It is “intuitively obvious” that there should be a Solomonoff-irreducible (up to some constant) program that can be implemented, given sufficient background knowledge of all of the components involved: Boolean circuits implemented on some substrate in such and such a way that they “compute” “arithmetic operations on integers” (really doing some fancy electrical acrobatics, to be later interpreted into a form we can perceive as an output, such as a sequence of pixels on a screen arranged to resemble the numerical output we want). And it is a physical fact about the universe that things arranged in such a way lead to such an outcome.
It is not obvious that we should then reverse the matter and claim that we ought to project a computational Platonism on to reality any more than logical positivist philosophers should have felt justified in doing that with mathematics and predicate logic a hundred years ago.
It is clear to me that we can perceive ‘computational’ patterns in top level phenomena such as the output of calculators or mental computations and that we can and have devised a framework for organizing the functional role of these processes (in terms of algorithmic information theory/computational complexity/computability theory) in a way that allows us to reason generally about them. It is not clear to me that we are further justified in taking the epistemological step that you seem to want to take.
I’m inclined to think that there is a fundamental problem with how you are approaching epistemology, and you should strongly consider looking into Bayesian epistemology (or statistical inference generally). I am also inclined to suggest that you look into the work of C.S. Peirce, and E.T. Jaynes’ book (as was mentioned previously and is a bit of a favorite around here; it really is quite profound). You might also consider Judea Pearl’s book “Causality”; I think some of the material is quite relevant and it seems likely to me that you would be very interested in it.
ETA: To Clarify, I’m not attacking the computable universe hypothesis; I think it is likely right (though I think that the term ‘computable’ in the broad sense in which it is often used needs some unpacking).
I am arguing against your concept “that truth exists outside of any implementation”.
My claim is that “truth” can only be determined and represented within some kind of truth evaluating physical context; there is nothing about the resulting physical state that implies or requires non-physical truth.
As stated here
To your question:
These others are producing physical artifacts such as writing or speech, which through some chain of physical interactions eventually trigger state changes in your brain. At a higher meta-level, you are taking multiple forms of observations, transforming them within your brain/mind and then comparing them… eventually concluding that “others tend to reach the same truth”. Another mind with its own unique perspective may come to a different conclusion, such as “Fred is wearing a funny hat.”
Your conclusion on truth is a physical state in your mind, generated by physical processes. The existence of a metaphysical truth is not required for you to come to that conclusion.
I think a meta- has gone missing here: I can’t be certain that others tend to reach the same truth (rather than funny hats), and I can’t be certain that 2+2=4. I can’t even be certain that there is a fact-of-the-matter about whether 2+2=4. But it seems damned likely, given Occamian priors, that there is a fact-of-the-matter about whether 2+2=4 (and, inasmuch as a reflective mind can have evidence for anything, which has to be justified through a strange loop on the bedrock, I have strong evidence that 2+2 does indeed equal 4).
That “truth” in the map doesn’t imply truth in the territory, I accept. That there is no truth in the territory, I vehemently reject. If two minds implement the same computation, and reach different answers, then I simply do not believe that they were really implementing the same computation. If you compute 2+2 but get struck by a cosmic ray that flips a bit and makes you conclude “5!”, then you actually implemented the computation “2+2 with such-and-such a cosmic ray bitflip”.
I am not able to comprehend the workings of a mind which believes arithmetic truth to be a property only of minds, any more than I am able to comprehend a mind which believes sheep to be a property only of buckets. Your conclusion on sheep is a physical state in your mind, generated by physical processes. But the sheep still exist outside of your mind.
Restating my claim in terms of sheep: The identification of a sheep is a state change within a context of evaluation that implements sheep recognition. So a sheep exists in that context.
Physical reality however does not recognize sheep; it recognizes and responds to physical reality stuff. Sheep don’t exist within physical reality.
“Sheep” is at a different meta-level than the chain of physical inference that led to that classification.
“Truth” is at a different meta-level than the chain of physical inference that led to that classification. There is no requirement that “truth” is in the set of stuff that has meaning within the territory.
When you look at the statement 2+2=4 you think some form of “hey, that’s true”. When I look at the statement, I also think some form of “hey, that’s true”. We can then talk and both come to our own unique conclusion that the other person agrees with us. This process does not require a metaphysical arithmetic; it only requires a common context.
For example, we both have a proximal existence within the physical universe, we have a communication channel, we both understand English, and we both understand basic arithmetic. These types of common contexts allow us to make some very practical and reasonable assumptions about what the other person means.
Common contexts allow us to agree on the consequences of arithmetic.
The short summary is that meaning/existence is formed by contexts of evaluation, and common contexts allow us to communicate. These processes explain your observations and operate entirely within the physical universe. The concept of metaphysical existence is not needed.
I think your argument involves reflection somewhere. The desk calculator agrees that 2+2=4, and it’s not reflective. Putting two pebbles next to two pebbles also agrees.
Look at the discussion under this comment; I maintain that cognitive agents converge, even if their only common context is modus ponens—and that this implies there is something to be converged upon. At the least, it is ‘true’ that that-which-cognitive-agents-converge-on takes the value that it does (rather than any other value, like “1=0”).
Mathematical realism also explains my observations and operates entirely within the mathematical universe; the concept of physical existence is not needed. The ‘physical existence hypothesis’ has the burdensome detail that extant physical reality follows mathematical laws; I do not see a corresponding burdensome detail on the ‘mathematical realism hypothesis’. Thus by Occam, I conclude mathematical realism and no physical existence.
I am not sure I have answered your objections because I am not sure I understand them; if I do not, then I plead merely that it’s 8AM, I’ve been up all night, and I need some sleep :(
Agreement with statements such as 2+2=4 is not a function that desk calculators perform. It is not the function performed when you place two pebbles next to two pebbles.
Agreement is an evaluation performed by your mind from its unique position in the universe.
The conclusion that convergence has occurred must be made from a context of evaluation. You make observations and derive a conclusion of convergence from them. Convergence is a state of your map, not a state of the territory.
Mathematical realism appears to confuse the map for the territory—as does scientific realism, as does physical realism.
When I refer to physical reality or existence I am only referring to a convenient level of abstraction. Space, time, electrons, arithmetic, these all are interpretations formed from different contexts of evaluation. We form networks of maps to describe our universe, but these maps are not the territory.
Gottlob Frege coined the term context principle in his Foundations of Arithmetic, 1884 (translated). He stated it as “We must never try to define the meaning of a word in isolation, but only as it is used in the context of a proposition.”
I am saying that we must never try to identify meaning or existence in isolation, but only as they are formed by a context of evaluation.
When you state:
I look for the context of evaluation that produces this result—and I recognize that the pebbles and agreement are states formed within your mind as you interact with the universe. To believe that these states exist in the universe you are interacting with is a mind projection fallacy.
Not so! In most interesting formal systems, there are exponentially many possible deductions that can be made. All known agents can only make a bounded number of possible deductions. A widely believed conjecture implies that longer-lived agents could only make a polynomial number of possible deductions.
Um, that’s not quite what I meant… if I prove a theorem you’re unlikely to prove its negation, and if I tell you I have a proof of a theorem you’re likely to be able to find one. Moreover, in the latter case, if you’re not able to find a proof, then either I am mistaken in my belief that I have a proof, or you haven’t looked hard enough (and in principle could find a proof if you did). That doesn’t mean that every agent with “I think therefore I am” will deduce the existence of rice pudding and income tax, it only means that if one agent can deduce it, the others in principle can do the same. For any formal system T, the set {X : T proves X} is recursively enumerable, no? (assuming ‘proves’ is defined by ‘a proof of finite length exists’, as happens in Gödel-numbering / PA “box”)
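The claim that {X : T proves X} is recursively enumerable can be illustrated with a toy system. As a hedged sketch, here is the MIU system from Hofstadter's Gödel, Escher, Bach standing in for PA (a real PA proof checker would be far too long for a comment); a breadth-first search over the mechanical derivation rules enumerates every theorem up to a chosen length:

```python
from collections import deque

def miu_theorems(max_len=6):
    """Enumerate theorems of the MIU system (strings derivable from the
    axiom 'MI') whose length does not exceed max_len. Because each rule
    is mechanical, the full theorem set is recursively enumerable."""
    def successors(s):
        out = set()
        if s.endswith("I"):               # rule 1: xI -> xIU
            out.add(s + "U")
        if s.startswith("M"):             # rule 2: Mx -> Mxx
            out.add("M" + s[1:] * 2)
        for i in range(len(s) - 2):       # rule 3: xIIIy -> xUy
            if s[i:i + 3] == "III":
                out.add(s[:i] + "U" + s[i + 3:])
        for i in range(len(s) - 1):       # rule 4: xUUy -> xy
            if s[i:i + 2] == "UU":
                out.add(s[:i] + s[i + 2:])
        return out

    seen = {"MI"}
    queue = deque(["MI"])
    while queue:
        s = queue.popleft()
        for t in successors(s):
            if t not in seen and len(t) <= max_len:
                seen.add(t)
                queue.append(t)
    return seen

theorems = miu_theorems(6)
# "MIU" is derivable in one step; "MU" is famously underivable (the count
# of I's is never divisible by 3), so it never appears in any enumeration.
```

The same shape of search works for any formal system with a decidable proof-checking relation; only the `successors` function changes.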
As you know, I am outside of mainstream opinion on this. But this:
is flatly wrong if P <> NP, and not controversially so. If I reach a point in a large search space, there is no guarantee that you can reach the same point in a feasible amount of time.
Recursively enumerable sure. Not feasibly enumerable.
But why should feasibility matter? Sure, the more steps it takes to prove a proposition, the less likely you are to be able to find a proof. But saying that things are true only by virtue of their proof being feasible… is disturbing, to say the least. If we build a faster computer, do some propositions suddenly become true, because we now have the computing power to prove them?
Me saying I have a proof of a theorem should cause you to update P(you can find a proof) upwards. (If it doesn’t, I’d be very surprised.) Consequently, there is something common.
Similarly, no matter how low your prior probability for “PA is consistent”, so long as that probability is not 0, learning that I have proved a theorem should cause you to decrease your estimate of the probability that you will prove its negation.
Incidentally but importantly, lengthiness is not expected to be the only obstacle to finding a proof. Cryptography depends on this.
As to why feasibility matters: it’s because we have limited resources. You are trying to reason about reality from the point of view of a hypothetical entity that has infinite resources. If you wish to convince people to be less skeptical of infinity (your stated intention), you will have to take feasibility into account or else make a circular argument.
I am certainly not saying that feasible proofs cause things to be true. Our previous slow computer and our new fast computer cause exactly the same number of important things to be true: none at all. That is the formalist position, anyway.
Not so. If I have P(PA will be shown inconsistent in fewer than m minutes) = p, then I also have P(I will prove the negation of your theorem in fewer than m+1 minutes) = p. Your ability to prove things doesn’t enter into it.
True; stick a ceteris paribus in there somewhere.
Not so; I am reasoning about reality in terms of what it is theoretically possible we might conclude with finite resources. It is just that enumerating the collection of things it is theoretically possible we might conclude with finite resources requires infinite resources (and may not be possible even then). Fortunately I do not require an enumeration of this collection.
So either things that are unfeasible to prove can nonetheless be true, or nothing is true. So why does feasibility matter again?
No, it is > p. P(I will prove 1=0 in fewer than m+1 minutes) = p + epsilon. P(I will prove 1+1=2 in fewer than m+1 minutes) = nearly 1. This is because you don’t know whether my proof was correct.
A positive but minuscule amount. This is how cryptography works! In less than a minute (aided by my very old laptop), I gave a proof of the following theorem: the second digit of each of the prime factors of n is 6, where
n = 44289087508518650246893852937476857335929624072788480361
It would take you much longer to find a proof (even though the proof is very short!).
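The asymmetry being described, that checking a claimed factorization is one multiplication while finding the factors is a search, can be sketched with a deliberately small semiprime. The numbers below are illustrative stand-ins (the 10000th and 100000th primes), not the factors of the 56-digit n above, which would be hopeless for trial division:

```python
def trial_factor(n):
    """Find the smallest factor of n by brute-force trial division.
    This is the 'finding a proof' direction: work grows like sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

# A small semiprime standing in for the 56-digit n in the comment above.
p, q = 104729, 1299709
n = p * q

# 'Checking the proof' is a single multiplication:
assert p * q == n

# 'Finding the proof' takes ~100,000 divisions even here; for a 56-digit
# semiprime, this method would not finish before the sun burns out.
found = trial_factor(n)
```

For cryptographic sizes the gap between the two directions is what the security of RSA-style systems rests on.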
Update!
About feasibility, I might say more later.
Right—but if there were no ‘fact-of-the-matter’ as to whether a proof exists, why should it be non-zero at all?
But that isn’t what either of us said. You mentioned P(you can find a proof). I am telling you (telling you, modulo standard open problems) that this can be very small even after another agent has found a proof. This is a standard family of topics in computer science.
I am aware it can be very small. The only sense in which I claimed otherwise was by a poor choice of wording. The use I made of the claim that “Agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems” was to argue for the proposition that there is a fact-of-the-matter about which theorems are provable in a given system. You accept that my finding a proof causes you to update P(you can find a proof) upwards by a strictly positive amount—from which I infer that you accept that there is a fact-of-the-matter as to whether a proof exists. In which case, you are not arguing with my conclusion, merely with a step I used in deriving it, a step I have since replaced. Does that not screen off my conclusion from that step? So why are you still arguing with me?
I am still arguing with you because I think your misstep poisons more than you have yet realized, not to get on your nerves.
No. “I can find a proof in time t” is a definite event whose probability can perhaps be measured (with difficulty!). “A proof exists” is a much murkier statement and it is much more difficult to discuss its probability. (For instance, it is not possible to have a consistent probability distribution over assertions like this without assigning P(proof exists) = 0 or P(proof exists) = 1. Such a consistent prior is an oracle!)
I wasn’t suggesting you were trying to get on my nerves. I just think we’re talking past each other.
As a first approximation, what’s wrong with “lim_{t → ∞} P(I can find a proof in time t)”?
Also, I don’t see why the prior has to be oracular; what’s wrong with, say, P(the 3^^^3th decimal digit of pi is even)=½? But then if the digit is X, then surely a proof exists that it is X (because, in principle, the digit can be computed in finitely many steps); it must be some X in [[:digit:]], so if it is even a proof exists that it is even; otherwise (sharp swerve) one does not, and P=½. Not sure about that sharp swerve; if I condition all my probabilities on |arithmetic is consistent) then it’s ok. But then, assuming I actually need to do so, the probabilities would be different if conditioned on |arithmetic is inconsistent), and thus by finding a proof, you find evidence for or against the assertion that arithmetic is consistent. But things you can find evidence on, exist! (They are the sheep that determine your pebbles.) So where did I go wrong? (Did I slip a meta-level somewhere? It’s possible; I was juggling them a bit.)
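The parenthetical claim that any given decimal digit of pi can be computed in finitely many steps can be made concrete. A minimal sketch using Machin's formula, pi = 16·arctan(1/5) − 4·arctan(1/239), with scaled integer arithmetic (the 3^^^3th digit is of course hopelessly out of reach; this only shows that the finite procedure exists):

```python
def pi_digits(n):
    """Return the first n decimal digits of pi (including the leading 3),
    via Machin's formula evaluated with scaled integer arithmetic."""
    scale = 10 ** (n + 10)  # 10 guard digits absorb truncation error

    def arctan_inv(x):
        # arctan(1/x) * scale, via the alternating Taylor series
        total, term = 0, scale // x
        k, sign, x2 = 1, 1, x * x
        while term:
            total += sign * (term // k)
            term //= x2
            k += 2
            sign = -sign
        return total

    pi_scaled = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    return str(pi_scaled)[:n]
```

Since the loop terminates once the term underflows the scale, the nth digit is determined by a finite, fully mechanical computation, which is all the “in principle” claim needs.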
I slipped. It’s P(I will find a proof in time t) that is asking for the probability of a definite event. It’s not that evaluating this number at large t is so problematic, it’s that it doesn’t capture what people usually mean by “provable in principle.”
If A is logically implied by B, then P(A) >= P(B), or else you are committing a version of the conjunction fallacy. One can certainly compute the digits of pi, so that since (as non-intuitionists insist anyway) either the nth digit is even, or it is odd, we must have P(nth digit is even) > P(axioms) or P(nth digit is odd) > P(axioms). P becomes an oracle by testing, for each assertion x, whether P(x) > P(axioms). There might be ways out of this but they require you to think about feasibility.
Suppose that a proof is a finite sequence of symbols from a finite alphabet (which supposition seems reasonable, at least to me). Suppose that you can determine whether a given sequence constitutes a proof, in finite time (not necessarily bounded). Then construct an ordering on sequences (this can be done: the set of sequences is the union over n in (countable) N of the finitely many sequences of length n, and is thus countable), and apply the determination procedure to each one in turn. Then, if a proof exists, you will find it in finite time by this method; thus P(you will find a proof by time t) tends to 1 as t → ∞ if a proof exists, and is constant 0 for all t if no proof exists.
There’s an obvious problem; we can’t determine with P=1 that a given sequence constitutes a proof (or does not do so). But suppose becoming 1 bit more sure, when not certain, of the proof-status of a given sequence, can always be done in finite time. Then learn 1 bit about sequence 1; then about sequences 1 and 2; then about sequences 1, 2, and 3; and so on. Then for any sequence, any desired level of certainty is obtained in finite time.
If something is provable in principle, then (with a certain, admittedly contrived and inefficient search algorithm) the proof can be found in finite time with probability 1. No?
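The enumerate-and-check search described above can be sketched directly. As a stand-in for a real proof system (again, a full proof checker would be long), take a toy system in which a “proof” of a target number is any expression over the symbols 1, +, * that evaluates to it; the checker is decidable, and the search visits candidate strings in length order, so it halts exactly when a proof exists:

```python
from itertools import count, product

def find_proof(target, alphabet="1+*"):
    """Search all finite symbol strings in length order, returning the
    first one the checker accepts. Halts iff a 'proof' exists (here one
    always does, since any target n is reachable as 1+1+...+1)."""
    def is_proof(candidate):
        # The decidable checker: does the string parse and evaluate
        # to the target? (eval is safe here: the alphabet contains no
        # names, and malformed strings raise SyntaxError.)
        try:
            return eval(candidate) == target
        except SyntaxError:
            return False

    for length in count(1):
        for symbols in product(alphabet, repeat=length):
            candidate = "".join(symbols)
            if is_proof(candidate):
                return candidate
```

If no proof existed for some target, the outer loop would simply never return, which is the r.e.-but-not-decidable behavior the argument turns on.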
Finite amount of time, yes. Feasible amount of time, no, unless P = NP. When I said that you were considering agents with unlimited resources, this is what I meant—agents for whom “in principle” is not different from “in practice.” There are no such agents under the sun.
But you don’t have to have unlimited resources, you just have to have some large but finite amount of resources X, and you don’t know how big X is.
Of course, in order to prove that your resources are sufficient to find the proof, without simply going ahead and trying to find the proof, you would need those resources to be unlimited—because you don’t know how big X is. But you still know it’s finite. “Feasibly computable” is not the same thing as “computable”. “In principle” is, in principle, well defined. “In practice” is not well-defined, because as soon as you have X resources, it becomes possible “in practice” for you to find the proof.
I say again that I do not need to postulate infinities in order to postulate an agent which can find a given proof. For any provable theorem, a sufficiently (finitely) powerful agent can find it (by the above diagonal algorithm); equivalently, an agent of fixed power can find it given sufficient (finite) time. So, while such might be “unfeasible” (whatever that might mean), I can still use it as a step in a justification for the existence of infinities.
I don’t think that’s valid—even if I know (P=1) that there is a fact-of-the-matter about whether the nth digit is even, if I don’t have any information causally determined by whether the nth digit is even then I assign P(even) = P(odd) = ½. If I instead only believe with P=P(axioms) that a fact-of-the-matter exists, then I assign P(even) = P(odd) = ½ * P(axioms). Axioms ⇏ even. Axioms ⇒ (even or odd). P(axioms) = P(even or odd) = P(even)+P(odd) = (½ + ½) * P(axioms) = P(axioms), no problem. “A fact-of-the-matter exists for statement A” is (A or ¬A), and assuming that our axioms include Excluded Middle, P(A or ¬A) >= P(axioms).
Summary: P is about my knowledge; existence of a fact-of-the-matter is about, well, the fact-of-the-matter. As far as I can tell, you’re confusing map and territory.
Suggestions of dire wrongness? Bah. Now I’m all tempted to read through the context. But so far I’ve failed to see the point!
Wait. “A proof exists” is murky? What sort of bizarre twisted mathematical reasoning is behind that? (Not that I’m saying mathematics doesn’t say equally bizarre and twisted things!)
In the set of all mathematical things you can say, either there is one that constitutes a proof or there isn’t. That doesn’t mean it is possible to establish whether such a proof exists by using all the computing power in the universe, but it still exists. Not a big deal.
Counter assertion: Yes there is. It’s just mundane logical uncertainty. But even if you decide you’re not allowed to assign probabilities to logical unknowns (making the whole exercise tautologically pointless) that doesn’t make “a proof exists” murky. If anything it makes it arbitrarily more clear cut. It means you’re allowed to say ‘a proof exists’ or the reverse claim but have forbidden yourself the chance to assign probabilities to the possibility. And not allowing yourself to think about it doesn’t do anything to change whether the thing exists or not.
“A proof exists, and here it is” is not at all murky. “A proof exists, I’m close to one and will show it to you soon” is not at all murky. Either statement could be true or false, and testably so. In the first case, it is just a matter of checking. In the second case, it is just a matter of waiting.
“A proof exists” full stop is in much worse shape. It has no testable consequences.
Certainly if “the set of all mathematical things” (even “the set of all mathematical proofs”) has some Platonic existence, then EC429 is in much better shape. But this is exactly the kind of thing that any “infinite set atheist” would be unwilling to take for granted.
Just! Mundane!
It has rather more testable consequences than an item that has moved out of my future light cone. Such things don’t stop existing or not existing because I personally am not able to test them. Because the universe isn’t about me any more than mathematical propositions are.
Yes. If you are unable to assign probabilities to logical uncertainties then I unequivocally reject your preferred approach to mathematics. It’s rather close to useless in a whole lot of situations that matter. Mine is better. To illustrate that I don’t even need to construct a scenario involving Omega, bets and saving the lives of cute little puppies. It is probably the sort of thing that could make it possible to construct an AI more easily or more effectively than yours. (I’m not certain about that—but that works almost as well as an illustration.)
Rather less, i.e. none. You at least have memories of something that’s moved out of your future light cone.
What did I say that made you think I don’t regard logical uncertainty as an important problem? But you’re bluffing if you’re claiming to have a solution to this problem. Anyway, any serious approach to logical uncertainty has to grapple with the fact that we have limited resources for computation (e.g. for computing probabilities!).
No I don’t. I remember hardly anything about stuff that has passed out of my future light cone. Especially about small dark objects. And since actual information is lost when stuff goes out of my future light cone some of those objects no longer have enough evidence about them in the entire light cone to be able to reconstruct what they are. Although I suppose there is a probability distribution over possible combinations of object-going-out-of-the-light-cone that could be constructed based on remaining evidence.
Things existing has nothing to do with me being able to detect them or to remember them or to in principle be able to deduce their existence based on evidence.
As for “a proof exists”, that is something that can sometimes be proven to exist or not exist. I’m told that sometimes you can even prove that a proof exists without having a proof. Which seems to rely on a bizarre definition of proof, but hey, mathematicians are into that sort of thing.
My position: Any way of defining terms such that assigning probabilities to “a proof exists” is forbidden is a broken way to define such terms. It neither matches the intuitive way we use the language nor constitutes a useful way to carve up reality.
I’m not bluffing anything. Nor am I writing a treatise on logical uncertainty. But I’ll tell you this: If I’m locked in a room, someone reliable generates a random 40-digit number from a uniform distribution right in front of me, I’m not given a computer, and I’m asked to bet at even odds that it is not a prime number, I am going to take that bet. Because it probably isn’t a prime number. That’s right, probably not.
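The bet is sound, and the margin can be estimated: by the prime number theorem the density of primes near 10^40 is about 1/ln(10^40) ≈ 1.1%, so a uniformly random 40-digit number is composite with probability around 99%. A sketch of both the estimate and the kind of probabilistic check one could run with a computer (Miller–Rabin, which itself only delivers a probability, which is exactly the point being argued):

```python
import math
import random

def probably_prime(n, rounds=20):
    """Miller-Rabin primality test: a composite is exposed by each random
    witness with probability >= 3/4, so a 'prime' verdict is wrong with
    probability <= 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# Prime number theorem estimate: density of primes near 10**40.
density = 1 / math.log(10 ** 40)   # roughly 0.011, i.e. about 1.1%
```

So even before any computation, the even-odds bet against primality wins about 99 times in 100.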
It’s not because I’m clever and prestigious and am claiming to have deep wisdom about logical uncertainty. It’s because I’m not stupid and I assign probabilities to stuff so I can win. If being impressive and insightful requires me to only assign 0% and 100% to things like that, then I don’t want to be impressive and insightful. Because that is literally for losers.
There’s very little in that I disagree with. In particular I think “winning” is a fine summary of what probability is for. But there is nothing for you to win when assigning a probability to a claim that has no testable consequences—winning is a testable consequence.
Just the most fundamental part:
Serious question: what’s special about the phrase “a proof exists” that requires you to assign a probability to it? Presumably there are some grammatically correct assertions that you don’t feel should have numbers attached to them.
I don’t consider “can have probabilities assigned” to be an exception that requires a special case. It gets treated the same as any other logical uncertainty. You can either handle logical uncertainty or you can’t.
I don’t understand. You’re saying you do have a prior probability for every grammatically correct assertion?
No, that isn’t something I’ve said.
Then what is it about the assertion “a proof exists” that calls out to have a number attached to it? Why is it similar to “a written proof will exist by tomorrow noon” and not similar to “the exquisite corpse will drink the new wine”?
In case it wasn’t clear, I rejected this line of reasoning. “A proof exists” is not a special case that needs justification.
Please refer to my first few comments on the subject. They constitute everything I wish to say on the (rather unimportant) subject of whether “a proof exists” is permitted. I don’t expect this conversation to produce any new insights.