Completeness, incompleteness, and what it all means: first versus second order logic
First order arithmetic is incomplete. Except that it’s also complete. Second order arithmetic is more expressive—except when it’s not—and is also incomplete and also complete, except when it means something different. Oh, and full second order logic might not really be a logic at all. But then, first order logic has no idea what the reals and natural numbers are, especially when it tries to talk about them.
That was about the state of my confusion, and I set out to try and clear it up. Here I’ll try and share an understanding of what is really going on with first and second order logic and why they differ so radically. It will be deliberately informal, so I won’t be distinguishing between functions, predicates and subsets, and will be using little notation. It’ll be exactly what I wish someone had told me before I started looking into the whole field.
Meaningful Models
An old man starts talking to you about addition, subtraction and multiplication, and how they interact. You assume he is talking about the integers; it turns out he means the rational numbers. The integers and the rationals are both models of addition, subtraction and multiplication, in that they obey all the properties that the old man set out. But notice that though he had the rationals in mind, he didn’t mention them at all: he just listed the properties, and the rational numbers turned out, very non-coincidentally, to obey them.
These models are generally taken to give meaning to the abstract symbols in the axioms—to give semantics to the syntax. In this view, “for all x,y xy=yx” is a series of elegant squiggles, but once we have the model of the integers (or the rationals) in mind, we realise that this means that multiplication is commutative.
Similarly, models can define the “truth” of sentences. Consider the following sentences:
(1) 2 has a multiplicative inverse.
(2) There exists a number that squares to −1.
(3) 2 is not equal to zero.
(4) If a+b=0, then a² = b².
(5) No number is equal to zero.
Are these true? You and the old man would disagree on (1), with him saying yes and you saying no—your models have enabled you to attach truth-values to the statement. You would both claim (2) is false, but there are other models—such as the complex numbers—where it is true. You would both claim (3) is true, but there are other models—such as the field with two elements—where it is false. So truth is model dependent.
The last two are interesting, because it turns out that (4) is true in every model, and (5) is false in every model. Statements like (4) and “not (5)” that are true in every model are called valid. Since they are independent of the choice of model, these statements are, in a certain sense, true from pure syntax. Both these statements can also be deduced purely from the axioms. It would be nice if all valid statements could also be deduced. But only first order logic allows this.
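Statements (4) and (5) can be spot-checked mechanically. Here’s a toy Python sketch (my own illustration, not from the post) that brute-forces them in a family of small models, the integers mod n. Passing these checks is of course not a proof of validity, but validity predicts exactly this pattern: (4) never fails, and (5) fails in every model.

```python
def four_holds_in_Z_mod(n):
    """Statement (4): if a+b=0, then a^2 = b^2, checked exhaustively mod n."""
    return all(
        (a * a) % n == (b * b) % n
        for a in range(n) for b in range(n)
        if (a + b) % n == 0
    )

def five_holds_in_Z_mod(n):
    """Statement (5): no number is equal to zero -- false whenever 0 exists."""
    return all(a != 0 for a in range(n))

print(all(four_holds_in_Z_mod(n) for n in range(2, 30)))   # (4) holds everywhere
print(any(five_holds_in_Z_mod(n) for n in range(2, 30)))   # (5) holds nowhere
```

The reason (4) survives every model: whenever a+b=0, b is the additive inverse of a, and inverses always square to the same value.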
What would also be nice is if you could agree on the model you’re using. Maybe the old man could add “2 (and every non-zero number) has a multiplicative inverse”, giving the field axioms, and ruling your integers right out. But there are still many fields—the rationals, the reals, the complex numbers, and many in between. But maybe with a few more axioms, you could really narrow things down, and treat the axioms and the model interchangeably. But only second order logic allows this.
First order fun
First order theories are those where you can quantify over the basic objects in your theory, and phrase statements like “all Greeks enjoy dancing” and “there exists a blind millionaire”. This distinguishes them from second order theories where you can quantify over higher order objects (predicates and functions), phrasing sentiments like “all nationalities are equal” or “there exists a dominant social class”—in first order logic with humans as basic objects, nationality and social class are predicates, and you can’t quantify over them. You don’t appreciate how limiting first order logic can be until you’ve worked with it a while; nevertheless, it’s a good logic to start with and possesses certain key properties not shared by higher order logics. Let’s start with the most famous result, the incompleteness theorem.
Gödel’s incompleteness theorem
Gödel’s (first) incompleteness theorem is a theorem about an arithmetic, but also (implicitly) about a model. The implicit model is the natural numbers: any arithmetic that can model them suffers from the incompleteness theorem. But it is not really about any particular model, beyond that requirement: it’s an intrinsic limitation of the system.
Let’s assume we have the usual (first order) Peano axioms for ordering, addition and multiplication. We also need an infinity of axioms to define induction, but that isn’t as bad as it seems: given a specific sentence, it’s easy to check whether or not it’s an axiom, in a fast and efficient way. To nobody’s surprise, the natural numbers are a model of first order Peano arithmetic.
And inside this model, we can construct the Gödel sentence G, which is equivalent to “there is no proof of G”. By ‘proof’, we mean a natural number n that is the Gödel number encoding a proof of G. Obviously, Peano arithmetic cannot prove G without being inconsistent; but this is precisely what G is saying, so we can actually see that G is true. And hence that “not G” cannot be provable if our arithmetic is consistent. This is the incompleteness theorem: neither G nor “not G” is provable, so the proof system is “incomplete”.
Gödel’s completeness theorem
Enough with incompleteness; what about Gödel’s completeness theorem? Unlike the previous theorem, this is a statement about the axiomatic system and all of its models. It simply says that if a sentence is valid (true in every model) for a first order theory, then it can be proved from the axioms. This provides a bridge between the semantic concept of “true” (true in every model) and the syntactic concept of “provable” (can be derived by these formal manipulations). It also implies that we can enumerate all the sentences that are valid in a first order system, simply by enumerating all the proofs.
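The enumeration argument can be sketched in a few lines of Python. The proof checker below is a toy stand-in (a real first order proof checker is perfectly mechanical, just lengthy); the point is only that, given any such checker, dovetailing through all finite strings lists every provable—hence, by completeness, every valid—sentence.

```python
from itertools import count, product

def all_strings(alphabet):
    """Every finite string over a finite alphabet, shortest first."""
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

def enumerate_theorems(alphabet, is_valid_proof, conclusion_of, limit):
    """List conclusions of valid proofs. With a real proof checker in place
    of is_valid_proof, completeness says this eventually lists exactly the
    valid sentences of a first order theory (in no particular order)."""
    found = []
    for s in all_strings(alphabet):
        if is_valid_proof(s):
            found.append(conclusion_of(s))
            if len(found) == limit:
                return found

# Toy stand-in: 'proofs' are strings of a's, each 'proving' itself.
print(enumerate_theorems("ab", lambda s: set(s) == {"a"}, lambda s: s, 3))
# → ['a', 'aa', 'aaa']
```

Note what this procedure can’t do: it can confirm a valid sentence eventually, but if a sentence never shows up, we’re left waiting forever—enumerability is weaker than decidability.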
Where does this leave the Gödel sentence G? We’ve seen we can’t prove it from the axioms; hence it cannot be true in all models. Therefore there must exist a model N’ of first order Peano arithmetic in which G is false. What does that mean? G claims that “there does not exist a number n with (certain properties)”, so if G is false, such an n does exist. Now we know (because we’ve constructed it that way) that if that n were a natural number, then those (certain properties) mean that it must encode a proof of G. Since there is no proof of G, n cannot be a natural number, but must be an extra, non-standard number from beyond our usual universe. This also means that those (certain properties) do not capture what we thought they did: they only mean “encodes a proof of G” for the standard natural numbers.
This seems somewhat troubling, that Peano arithmetic would admit two distinct models and fail to say what we thought it said; but it gets worse.
The Löwenheim–Skolem theorem
The Löwenheim–Skolem theorem says that if a countable first order theory (such as Peano arithmetic) has an infinite model, then it has a model for every infinite size (“cardinality”). Therefore first order Peano arithmetic has to have many, many models; at a minimum, it has to have a model of the same size as the reals.
This also means that no matter how much first order information we add to Peano arithmetic, we cannot restrict it to only being about the natural numbers; the models and the axioms can never be interchangeable. But it gets still worse.
Uncountable models of Peano arithmetic are bad enough, but it turns out that Peano arithmetic also has many other countable models—models of the same size as the natural numbers, but weirdly different. ‘Weird’ is an apt summary of these models, in which neither addition nor multiplication can be computed.
So first order Peano arithmetic is not really about the natural numbers at all—but about the natural numbers and all these strange countable and uncountable models that we can’t really describe. Maybe second order theories will do better?
Second order scariness
In second order theories, we can finally do what we’ve been itching to do: apply the existential and universal quantifiers to predicates, functions, sets of numbers and objects of that ilk. We can triumphantly toss away the infinitely many axioms needed to define induction in first order Peano arithmetic, and replace them with a simple:
“Every (non-empty) set of numbers has a least element.”
This, as every schoolboy should know, is enough to uniquely define the natural numbers. Or is it?
The importance of semantics
That sentence remains a series of squiggles until we’ve decided what those squiggles actually mean. This takes on an extra importance in second order logic that it didn’t have in first order. When we say “every set of numbers”, what do we mean? In terms of meaning and models, what models are we going to be considering?
The first idea is the obvious one: “every set of numbers” means, duh, “every set of numbers”. When we specify a model, we’ll give the ‘universe of discourse’, the basic objects (maybe the natural numbers or the reals), and the quantifier ‘every’ will range over all possible subsets of this universe (every subset of the natural or real numbers). This is called the full or standard semantics for second order arithmetic, and the models are called full models.
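For a finite universe, “every possible subset” can actually be listed. A quick sketch (my own, not the post’s) makes the range of the quantifier concrete—and hints at the trouble ahead, since for an infinite universe the analogous collection is uncountable and can’t be listed at all:

```python
from itertools import chain, combinations

def powerset(universe):
    """All subsets of a finite universe: the range of 'every set' under
    full semantics. A universe of size n has 2**n subsets."""
    s = list(universe)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

print(len(powerset(range(4))))  # → 16 subsets for a 4-element universe
```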
But note what we have done here: we have brought in extra information to clarify what we are talking about. In order to define full semantics, we’ve had to bring set theory into the mix, to define all these subsets. This caused Quine to accuse second order logic of not being a logic at all, but a branch of set theory.
Also with all these Russell Paradoxes flying around, we might be a bit wary of jumping immediately into the ‘every set’ semantics. Maybe instead of defining a model by just giving the ‘universe of discourse’, we would want to also define the subsets we are interested in, listing explicitly all the sets we are going to allow the quantifiers to range over.
But this could get ridiculous—do we really want a model which includes the natural numbers, but only allows us to quantify over the sets {1, 7, 13908}, {0} and the empty set? We can define, for instance, what an even number is (a number that is equal to two times something), and so why can’t we get the set of even numbers into our model?
Henkin semantics
We really want our ‘universe of quantifiable sets’ to include any set we can define. It turns out this is something we can get from inside second order logic, by using the comprehension axioms. These roughly say that “any set/predicate we can define is in the universe of sets/predicates we can quantify over”. There are infinitely many such comprehension axioms, one covering each definition.
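As a loose analogy (mine, not the article’s), each comprehension axiom behaves like a set comprehension: a formula you can write down carves a set out of the universe of discourse, and that set then becomes available to the quantifiers.

```python
# One 'comprehension instance' per formula: the formula (here a Python
# predicate) yields the set of elements of the universe satisfying it.
def comprehend(formula, universe):
    return {x for x in universe if formula(x)}

evens = comprehend(lambda n: n % 2 == 0, range(10))
print(sorted(evens))  # → [0, 2, 4, 6, 8]
```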
Then Henkin semantics is second-order logic, with all the comprehension axioms, and no further restrictions on the possible models. These ‘Henkin models’ will have both a defined universe of discourse (the list of basic objects in the model we can quantify over) and a defined ‘universe of sets/predicates’ (a list of sets of basic objects that we can quantify over), with the comprehension axioms making sure they are compatible. Though they are called ‘Henkin semantics’, they could really be called ‘Henkin syntaxes’, since we aren’t giving any extra restrictions on the models apart from the internal axioms.
It should be noted that a full model (where the ‘universe of sets’ includes all possible subsets) automatically obeys the comprehension axioms, since it can quantify over every set. So every full model is a Henkin model, and it might seem that Henkin semantics are a simple extension of full semantics, and that they have a greater ‘expressive power’. Few things could be further from the truth.
First or second order?
If the old man from before had claimed “every number is even”, and, when challenged with “3”, had responded “3 is not a number”, you might be justified in questioning his grasp of the meaning of ‘every’ and ‘number’. Similarly, if he had said “every (non-empty) set of integers has a least element,” and when challenged with “the negative integers” had responded “that collection is not a set”, you would also question his use of ‘every’.
Similarly, since Henkin semantics allows us to restrict the meaning of “every set”, depending on the model under consideration, statements such as “every set is blah” are much weaker in Henkin semantics than in full semantics. For instance, take the axioms for an ordered field, and add:
“Every (non-empty) bounded set has a supremum”
As every schoolgirl should know, this is enough to model the real numbers… in full semantics. But in Henkin semantics, ‘every bounded set’ can mean ‘every definable bounded set’ and we can take the ‘definable reals’ as a model: the supremum of a definable set is definable. And this does not include all the real numbers; for a start, the definable reals are countable, so there are far fewer of them than there are reals.
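The countability claim rests on a simple count: every definable real needs a finite defining string over some finite alphabet, and there are only countably many of those. A small sketch of the counting (my own, under the simplifying assumption of a 2-symbol alphabet):

```python
def strings_up_to(alphabet_size, max_len):
    """Number of finite strings of length 1..max_len. This is finite for
    every max_len, so the union over all lengths -- and hence the set of
    possible definitions, and of definable reals -- is at most countable."""
    return sum(alphabet_size ** n for n in range(1, max_len + 1))

print(strings_up_to(2, 10))  # → 2046 (that is, 2 + 4 + ... + 1024)
```

Cantor’s diagonal argument shows the reals themselves are uncountable, so this list of definitions can’t possibly cover them all.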
This may seem a feature, rather than a bug. What are these strange ‘non-definable reals’ that clutter up the place; why would we want them anyway? But the definable reals are just one Henkin model of these axioms; there are other perfectly valid models, including the reals themselves. So these axioms have not allowed us, in Henkin semantics, to pick out one particular model.
This seems familiar: the first order Peano axioms also failed to specify the natural numbers. The familiarity is not an illusion: Henkin semantics is actually a first order theory (a ‘many sorted’ one, where some classes of objects have different properties). Hence the completeness theorem still applies: any result true in every Henkin model can be proved from the basic axioms. But this is not much help if we have many models, and unfortunately the Löwenheim–Skolem theorem also still applies: if we have one infinite model, we have many, many others. So not only do we have the countable ‘definable reals’ and the reals themselves as models, but also larger ‘hyperreals’ and ‘superreals’ with many more elements to them.
Skolem’s paradox
In fact, Henkin semantics can behave much worse than standard first order logic, as it can express more. Express more—but ultimately, not mean more. For instance, in second order language, we can express the sentence “there exists an uncountable set”. We could start by defining an infinite set as one with a one-to-one correspondence with a strict subset of itself, à la Cantor. We could define an uncountably infinite set as one that has a subset that is also infinite, but that doesn’t have a one-to-one correspondence with it (the subset is of lower cardinality). There are other, probably better, ways of phrasing the same concept, but that will do for here.
Then basic second order logic with Henkin semantics and the additional axiom “there exists an uncountable set” certainly has a model: the reals, for instance. Then by the Löwenheim–Skolem theorem, it must have a countable model.
Wait a moment there. A logic that asserts the existence of an uncountable set… has a countable model? This was Skolem’s paradox, and one of his arguments against first order logic. The explanation for the paradox involves those one-to-one correspondences mentioned above. An uncountable set is an infinite set without any one-to-one functions to any countable set. But in a Henkin model, ‘any one-to-one function’ means ‘any one-to-one function on the list of allowable functions in this model’. So the ‘uncountable set’ in the countable model is, in fact, countable: it has one-to-one functions to other countable sets. But all these functions are banned from the Henkin model, so the model cannot see, internally, that the set is actually countable.
So we can express a lot of statements in Henkin semantics—“every bounded set has a supremum”, “there exists an uncountable set”—but these don’t actually mean what we thought they did.
Full second order semantics
Having accepted the accusations of sneaking in set theory, and the disturbing fact that we had to bring in meaning and semantics (by excluding a lot of potential models), rather than relying on the syntax… what can we do with full second order semantics?
Well, for a start, we can finally nail down the natural numbers and the reals. With second order Peano arithmetic, including the second order induction axiom “every (non-empty) set of numbers has a least element”, we know that we have only one (full) model: the natural numbers. Similarly, if we have the axioms for an ordered field and toss in “every bounded set has a supremum”, then the reals are the only full model that stands up.
This immediately implies that full second order semantics are not complete, unlike Henkin semantics and first order theories. We can see this from the incompleteness result (though don’t confuse incompleteness with non-completeness). Take second order Peano arithmetic. This has a Gödel statement G which is true but unprovable. But there is only one model of second order Peano arithmetic! So G is both unprovable and true in every model of the theory.
It may seem surprising that completeness fails for full semantics: after all, it is true in Henkin semantics, and every full model is also a Henkin model, so how can this happen? It derives from the restriction of possible models: completeness means that every sentence that is true in every Henkin model, must be provable. That does not mean that every sentence that is true in every full model, must also be provable. The G sentence is indeed false in some models—but only in Henkin models that are not full models.
The lack of completeness means that the truths of second order logic cannot be enumerated—it has no complete proof procedure. This causes some to reject full second order logic on these grounds. Others argue that completeness is not the important factor, but rather decidability: listing all the provable statements might be light entertainment, but what we really want is an algorithm that can prove (or disprove) any given sentence. But the Church–Turing resolution of the Entscheidungsproblem demonstrates that this cannot be done, in either first or second order logic: hence neither system can claim to be superior in this respect.
Higher-order logic within full second order logic
Higher order logic is the next step up—quantifying over predicates of predicates, functions of functions. This would seem to make everything more complicated. However, there is a result due to Hintikka that any sentence in full higher order logic can be shown to be equivalent (in an effective manner) to a sentence in full second order logic, using many-sorting. So there is, in a certain sense, no need to go beyond, and the important debate is between first order and full second order logic.
Conclusion
So, which logic is superior? It depends to some extent on what we need it for. Anything provable in first order logic can be proved in second order logic, so if we have a choice of proofs, picking the first order one is the better option. First order logic has more pleasing internal properties, such as the completeness theorem, and one can preserve this in second order via Henkin semantics without losing the ability to formally express certain properties. Finally, one needs to make use of set theory and semantics to define full second order logic, while first order logic (and Henkin semantics) get away with pure syntax.
On the other hand, first order logic is completely incapable of controlling its infinite models, as they multiply, uncountable and generally incomprehensible. If, rather than looking at the logic internally, we have a particular model in mind, we have to use second order logic. If we’d prefer not to use infinitely many axioms to express a simple idea, second order logic is for us. And if we really want to properly express ideas like “every (non-empty) set has a least element” or “every analytic function is uniquely defined by its power series”—and not just express them, but have them mean what we want them to mean—then full second order logic is essential.
EDIT: an addendum addresses the problem of using set theory (a first order theory) to define second order logic.
Hold on a minute. What does it even mean to speak of proving something in second-order logic? First order logic has various deduction systems, which we usually don’t bother to mention by name because they’re all equivalent. How does one actually perform deductions in proofs in second-order logic? I was under the impression that second-order logic was purely descriptive (i.e. a language to write precise statements which may then be judged true or false) and did not allow for deduction. After all, to perform deductions about sets, one will need some sort of theory of how the sets work, no? And then you may as well just do set theory—which is after all how we generally do things...
Proofs in second-order logic work the exact same way as proofs in first-order logic: you prove a sentence by beginning with axioms and constructing the sentence using inference rules. In first-order logic, you have individuals, functions (from individuals to individuals), terms, and predicates (loosely speaking, functions from individuals to truth values), but may only quantify over individuals. In second-order logic, you may also quantify over predicates and functions. In third-order logic, you add functions of functions of individuals, and, in fourth-order logic, the ability to quantify over them. This continues all the way up to omega-eth order logic (a.k.a. higher-order logic), where you may use functions of arbitrary orders. The axioms and inference rules are the same as in first-order logic.
Wait, I said nothing about sets above. Sets are no problem: a set containing certain elements is equivalent to a predicate which only returns true on those elements.
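That identification is easy to make concrete. Here’s a small sketch (my own, not the commenter’s) in which a set simply is its membership predicate, and a set operation becomes an operation on predicates:

```python
# A set is interchangeable with a predicate: its membership test.
def to_predicate(elements):
    members = frozenset(elements)
    return lambda x: x in members

def union(p, q):
    """Union of two 'sets', computed purely on their predicates."""
    return lambda x: p(x) or q(x)

evens = lambda n: n % 2 == 0          # an 'infinite set' as a predicate
small = to_predicate({0, 1, 2})       # a finite set, wrapped as a predicate

in_either = union(evens, small)
print(in_either(1), in_either(4), in_either(7))  # → True True False
```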
I also lied about it being the same as first-order logic—a couple of axioms are added. One very useful one is the comprehension schema, which roughly says that you can find a predicate equivalent to any formula. You can think of this as being an axiom schema of first-order logic, except that, since it cannot be stated until fourth-order logic, it contributes no axioms in first-order logic.
(My formal logic training is primarily in Church’s theory of types, which is very similar to higher-order logic [though superficially extremely different]. I probably mixed-up terminology somewhere in the above.)
The comprehension axiom schema (or any other construction that can be used by a proof checker algorithm) isn’t enough to prove all the statements people consider to be inescapable consequences of second-order logic. You can view the system you described as a many-sorted first-order theory with sets as one of the sorts, and notice that it cannot prove its own consistency (which can be rephrased as a statement about the integers, or about a certain Turing machine not halting) for the usual Goedelian reasons. But we humans can imagine that the integers exist as “hunks of Platoplasm” somewhere within math, so the consistency statement feels obviously true to us.
It’s hard to say whether our intuitions are justified. But one thing we do know, provably, irrevocably and forever, is that being able to implement a proof checker algorithm precludes you from ever getting the claimed benefits of second-order logic, like having a unique model of the standard integers. Anyone who claims to transcend the reach of first-order logic in a way that’s relevant to AI is either deluding themselves, or has made a big breakthrough that I’d love to know about.
Indeed, since the second-order theory of the real numbers is categorical, and since it can express the continuum hypothesis, an oracle for second-order validity would tell us either that CH or ¬CH is ‘valid’.
(“Set theory in sheep’s clothing”.)
If I understand correctly, you are saying that higher order logic cannot prove all theorems about the integers that ZFC can. That’s a very uninteresting statement. Since higher order logic was proven consistent* with ZFC, it is strictly weaker. Second order logic, is, of course, strictly weaker than 4th order logic, which is strictly weaker than 6th order logic, and so on, and is thus much weaker than higher-order logic.
I’ve never heard it claimed that second-order logic has a unique model of the standard integers. Actually, I’ve never heard of the standard integers—do you mean a unique standard model of the integers? I hope not, as we actually can get a unique standard model of the integers in a system with machine-checkable proofs. See “To Truth Through Proof” by Peter Andrews. It’s omitted in the Google Books preview, but I believe the proof is on page 327; I lent out my copy and cannot check. (It, naturally, does not have a unique general model, so this is not of huge practical significance.)
* I have not actually read that, and should not really say that. However, type theory with an axiom of infinity, which is strictly stronger than type theory without an axiom of infinity, which can express all statements of higher-order logic, has been proven consistent. Also, any proof in higher order logic can be trivially converted into a proof of nth-order logic for some n, which can be shown consistent in (n+2)th-order logic.
We don’t seem to understand each other yet :-(
The argument wasn’t specific to ZFC or HOL, it was intended to apply to any system that can have a proof checker.
Sorry, I screwed up the wording there, meant to say simply “unique model of the integers”. People often talk about “standard integers” and “nonstandard integers” within models of a first-order axiom system, like PA or ZFC, hence my screwup. “Standard model” seems to be an unrelated concept.
The original post says something like that, search for “we know that we have only one (full) model”.
Pointing out the Gödelian limitations of all systems with recursively enumerable axioms doesn’t seem like much of criticism of the system of nth-order logic I mentioned. Now I have less of an idea of what you’re trying to say.
By the way, I think he’s using “full model” to mean “standard model.” Presumably, the standard integers are the standard model that satisfies the Peano axioms, while nonstandard integers are any other satisfying model (see http://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic ).
Not much, admittedly :-) The interesting question to me is how to make a computer understand “integers” the way we seem to understand them. It must be possible because humans are not magical, but second-order logic doesn’t seem to help much.
Computers can prove everything about integers that we can. I don’t see a problem here.
If the human and the computer are using the same formal system, then sure. But how does a human arrive at the belief that some formal system, e.g. PA, is “really” talking about the integers, and some other system, e.g. PA+¬Con(PA), isn’t? Can we formalize the reasoning that leads us to such conclusions? Pointing out that PA can be embedded in some larger formal system, e.g. ZFC, doesn’t seem to be a good answer because that larger formal system will need to be justified in turn, leading to an infinite regress. And anyway humans don’t seem to be born with a belief in ZFC, so something else must be going on.
We form beliefs about mathematics the same way we form beliefs about everything else: heuristic-based learning algorithms. We typically accept things based on intuition and inductive inference until trained to rely on proof instead. There is nothing stopping a computer from forming mathematical beliefs based on statistical inference rather than logical inference.
Have a look at experimental mathematics or probabilistic number theory for some related material.
Saying “heuristic-based learning algorithms” doesn’t seem to compress probability mass very much. It feels like skipping over the mysterious part. How exactly do we write a program that would arrive at the axioms of PA by using statistics? I think if we did that, then the program probably wouldn’t stop at PA and would come up with many other interesting axioms, so it could be a useful breakthrough.
Yes, but the point is that we are learning features from empirical observations, not using some magic deduction system that our computers don’t have access to. That may only be one bit of information, but it’s a very important bit. This skips over the mysterious part in the exact same way that “electrical engineering” doesn’t answer “How does a CPU work?”—it tells you where to look to learn more.
I know far less about empirical mathematics than about logic. The only thing along these lines I’m familiar with is Douglas Lenat’s Automated Mathematician (which is only semi-automated). A quick search for “automated mathematician” on Google Scholar gives a lot of more recent work, including a 2002 book called “Automated theory formation in pure mathematics.”
Well, human beings are abductive and computational reasoners anyway. I think our mental representation of the natural numbers is much closer to being the domain-theoretic definition of the natural numbers as the least fixed point of a finite set of constructors.
Note how least fixed-points and abductive, computational reasoning fit together: a sensible complexity prior over computational models is going to assign the most probability mass to the simplest model, which is going to be the least-fixed-point model, which is the standard model (because nonstandard naturals will require additional bits of information to specify their constructors).
A similar procedure, but with a coinductive (greatest-fixed-point) definition that involves naturals as parameters to the data constructors, will give you the real numbers.
There exist deductive systems for second order logic.
Right. There exist deductive systems, plural. Are they equivalent, like the ones for first-order logic are? As I understand it, If you want to do deduction in second-order logic, you need to specify a deductive system; you can’t just do deduction in second-order logic alone. Whereas in first-order logic there’s no need to specify the deductive system because they’re all equivalent.
Good question. I don’t know the answer. If they’re not equivalent, then I see your point.
The different deductive systems described there (can’t access the link, Wikipedia’s blacked out) all seem the same—they differ only in the axioms they use, which isn’t really a difference in deductive systems.
But the question is, starting from the same axioms—not logical axioms, not axioms of the deductive systems, but the axioms of whatever it is you’re trying to reason about—would they produce the same theorems?
Wikipedia is accessible if you disable JavaScript (or use a mobile app, or just Google cache).
If anyone is curious, I’m downvoting everyone in this thread—not only is this a terrible place to discuss SOPA and blackout circumventions (seriously, we can’t wait a day and get on with our lives?), there’s already a SOPA post in Discussion.
A policy that anyone who responds to comments from the ‘recent comments’ page can be expected to find contemptible.
Or just wait for the page to load, then click the Cross button before the blackout page loads.
The blackout is a lie.
Edit: Apparently, saying ‘the blackout is a lie’ offends people. I have no idea why that would be.
Rather, the blackout is not an absolute prevention of all access to data. They describe the workarounds they provide on their about page and explain the purpose of the inconvenience.
EDIT: Retraction only to prevent further gwernization.
The thing is… why even call it a ‘blackout’?
The intent is clearly to nod in the direction of the inconvenience of a world suddenly without Wikipedia, without actually making us suffer such a fate.
And, of course, if Wikipedia actually managed to find a way to successfully prevent people from reading wikipedia content for a day they would be massively undermining their own position. A big part of the argument against SOPA is “it will not work to prevent copyright infringement anyway!” An actual successful block on all wikipedia content would amount to “OK guys, this is how it is really done!”
Retraction to prevent further gwernization. I LIKE this comment.
Oh. It looks like I was confused about what Gödel’s first incompleteness theorem really says.
What the theorem does not say: that if a system is consistent, then it can prove neither G nor Not G.
Why doesn’t the theorem say that? Because we can define a system PA^ consisting of PA and an extra axiom asserting the inconsistency of PA. PA^ is consistent in the sense that you can’t derive a contradiction from it (though note that it’s not sound, so don’t trust any result that PA^ “proves”).
But PA^ can prove its version of Not G. PA^ proves that PA^ can prove anything; hence PA^ proves that PA^ can prove G; hence PA^ proves Not G.
So why did I feel (possibly unfairly) that this post promoted my original confusion? It says this: “Obviously, Peano arithmetic cannot prove G without being inconsistent; but this is precisely what G is saying, so we can actually see that G is true.”
G doesn’t say that “PA cannot prove G without being inconsistent”. G says that “PA cannot prove G”. From the point of view of PA, those are not equivalent statements. So the sort of reasoning used in the article will work for PA but not for PA^ - we need to know something about the soundness of the system we are working in, not merely its consistency.
I’ve been thinking… How is it that we can meaningfully even think about full semantics second order logic, if physics is computable?
What I mean is… what are we doing if we think we’re talking about or thinking about full semantics? That is, if no explicit rule-following computable thingy can encode rules/etc that pin down full semantics uniquely, what are our brains doing when we think we mean something when we mean “every” subset?
I’m worried that it might be one of those things that feels/seems meaningful, but isn’t. That our brains cannot explicitly “pin down” that model. So… what are we actually thinking we’re thinking about when we’re thinking we’re thinking about full semantics/”every possible subset”?
(Does what I’m asking make sense to anyone? And if so… any answers?)
Excellent post! It is well-written and pushes through a pain point. Good hook, also.
You have some weird periods in the paragraph beginning “The last two are interesting...”
They follow the numbers from his list, which also have periods after them. I think he was trying to show that he was referring to the items from his list, as opposed to the numbers themselves.
I think he should have used parentheses around each number instead.
I have now.
This may be the first time I’ve upvoted a mathematics-related post on LW.
“Every set of numbers has a least element” clearly does NOT define the natural numbers. Consider N U {-1}.
“Every set of numbers has a least element”—along with the other, non-inductive axioms of Peano arithmetic.
You might want to insert “non-empty”, though.
Done!
You should say “replace THEM”, in that case, to refer to the infinite set of axioms, as opposed to Peano Arithmetic.
Corrected