Book Review: Computability and Logic
I’m reviewing the books on the MIRI course list. After putting down Model Theory partway through I picked up a book on logic. Computability and Logic, specifically.
Computability and Logic
This book is not on the MIRI course list. It was recommended to me by Luke along with a number of other books as a potential way to learn provability logic.
Computability and Logic is a wonderful book. It’s well written. It’s formal, but pulls off a conversational tone. It demonstrates many difficult concepts with ease. It even feels nice — it’s got thick pages, large text, and a number of useful diagrams.
That said, I didn’t find it very useful to me personally.
This book is a wonderful introduction to computability, incompleteness, unsatisfiability, and related concepts. It masterfully motivates the connection between computability and logic (a subject near and dear to my heart). It could be an invaluable resource for anyone in computer science looking to branch out into logic. It starts with the basic concept of enumeration and takes you all the way through Löb’s theorem: quite an impressive feat, for one textbook.
For me, though, it was on the easy side. I already knew all the computability stuff quite well, and skimmed over much of it. The logic sections were a good refresher, though they were somewhat rudimentary by comparison to Model Theory. (Actually, this book would have been a great precursor to Model Theory: It spent quite a bit of time motivating and fleshing out concepts that Model Theory dumps on your head.)
Still, while this book was not exactly what I needed, I highly recommend it for other purposes. Its contents are summarized below.
Contents
Enumerability
This chapter introduces enumerability, and some of the cooler results. If any of the below sounds surprising to you, you’d be well served by reading this chapter:
If there’s a one-to-one mapping between two sets, those sets are “the same size”. By this measure, the following sets are the same size:
The natural numbers 0, 1, 2, 3, … (We call any one-to-one mapping between a set and the natural numbers an “enumeration”.)
All integers: we can enumerate them 0, 1, −1, 2, −2, …
All even numbers: we can enumerate them 0, 2, 4, 6, …
All fractions. See if you can find your own enumeration of the fractions — It’s a fun exercise, if you’ve never done it before.
This leads to a discussion of the “size” of infinite sets, which can be somewhat surprising the first time through. Knowledge of enumerability is central to many issues in computer science, and is integral in any study of infinities. I highly recommend you familiarize yourself with these concepts at some point in your life, if only for fun.
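If you want to spoil the fractions exercise for yourself, here is one standard scheme sketched in Python (the function name and details are my own, not the book's): walk the diagonals of the numerator/denominator grid, skipping duplicates like 2/4.

```python
from fractions import Fraction

def enumerate_fractions():
    """Enumerate all positive fractions, each exactly once, by walking
    the diagonals of the numerator/denominator grid."""
    seen = set()
    d = 2  # d = numerator + denominator, constant along each diagonal
    while True:
        for num in range(1, d):
            den = d - num
            q = Fraction(num, den)
            if q not in seen:  # Fraction normalizes, so 2/4 == 1/2 is skipped
                seen.add(q)
                yield q
        d += 1

gen = enumerate_fractions()
first = [next(gen) for _ in range(6)]
# first six: 1, 1/2, 2, 1/3, 3, 1/4
```

Every positive fraction appears at some finite diagonal, so this really is an enumeration.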
Diagonalization
When you realize how much can be enumerated (there’s a one-to-one mapping between natural numbers and all finite subsets of the natural numbers!) it’s easy to get excited, and feel like you can enumerate anything.
But you can’t.
Diagonalization stops the buck. It’s a tool for finding a set that defies a given encoding. Diagonalization is incredibly important if you want to play with infinities or learn about decidability. If you haven’t yet seen diagonalization, you’re in for a treat.
I won’t ruin the fun here: The second chapter of Computability and Logic is a clever and easy introduction to diagonalization, and I highly recommend it to newcomers. Again, if this sounds new, I highly recommend picking up Computability and Logic: you’re in for a treat.
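For those who already know the trick, the core of it fits in a few lines of Python (a toy rendition of my own, not the book's presentation): given any purported enumeration of sets of naturals, build a set that disagrees with the i-th set about the number i.

```python
def diagonalize(enumeration, n):
    """Given a purported enumeration of sets of naturals (a function from
    index i to a membership test), return (a finite prefix of) a set that
    differs from the i-th enumerated set on the element i."""
    return [i for i in range(n) if not enumeration(i)(i)]

# A doomed attempt to enumerate all sets of naturals:
# "the i-th set is the set of multiples of i+1".
attempt = lambda i: (lambda k: k % (i + 1) == 0)

# The diagonal set disagrees with set i about i, so it appears nowhere
# in the enumeration:
diagonal = diagonalize(attempt, 6)  # → [1, 2, 3, 4, 5]
```

No matter what enumeration you hand it, the diagonal set can't be any of the enumerated sets, which is the whole theorem.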
Turing Computability
This chapter introduces Turing Computability, which is literally the bedrock of computer science. We focus on the question of what can (in principle) be computed. For those of you who don’t know, the Turing Machine is an idealized computation engine which can compute answers to a very large class of problems.
If you’re not familiar with the concept, this chapter is a great introduction. Even if you’re vaguely familiar with the concept, but you’re not quite sure how a Turing machine works (“wait, it only has a fixed number of states?”), then this chapter is a great way to beef up your understanding. It walks you through the implementation of a Turing machine, and shows you a number of clever programs by which a Turing machine computes simple functions. You’ll find yourself actually walking through the actions of the machine in your head, which is a great way to get a feel for the idealized basics of computation.
Again, if any of this sounds new to you, I highly recommend picking up this book and reading the first few chapters.
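If you'd rather poke at one directly, a Turing machine takes only a few lines to simulate. This is my own toy encoding, not the book's: a program maps (state, symbol) pairs to (write, move, new state) triples, and the tape is a dict with blank cells reading 0.

```python
def run_turing_machine(program, tape, state="start", steps=100):
    """Minimal Turing-machine simulator. `program` maps (state, symbol)
    to (symbol_to_write, head_move, new_state); moves are +1/-1."""
    tape = dict(tape)
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        sym = tape.get(pos, 0)           # blank cells read as 0
        write, move, state = program[(state, sym)]
        tape[pos] = write
        pos += move
    return tape

# A machine that writes three 1s on a blank tape and halts:
program = {
    ("start", 0): (1, +1, "a"),
    ("a", 0):     (1, +1, "b"),
    ("b", 0):     (1, +1, "halt"),
}
result = run_turing_machine(program, {})
# positions 0, 1, 2 now hold 1
```

Tracing a run of this by hand is exactly the kind of exercise the chapter walks you through.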
Uncomputability
After seeing Turing machines, it is again easy to grow overconfident and feel like you can compute anything.
As before, you can’t.
As a matter of fact, the proof that you can’t compute everything with a Turing machine is exactly the same as the proof that you can’t enumerate every set. It turns out Turing programs are enumerable (as is anything you can write out with symbols from a finite alphabet). So we can use the exact same technique — diagonalization — to construct problems that a Turing machine cannot solve.
Again, if any of this sounds foreign (or even if you’re just rusty), this chapter is a delightful introduction to uncomputability. For example, it constructs the halting problem from a diagonalization of an encoding of Turing machine instructions. This smoothly unifies diagonalization with the intuitive impossibility of the halting problem. The chapter even touches on the busy beaver problem. I can’t do it justice in a few mere paragraphs: if any of this sounds fun, don’t hesitate to pick up this book.
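The shape of the argument is easy to sketch (in Python here, with `halts` standing for the hypothetical decider being refuted — no such total computable function exists, which is the point):

```python
def make_contrary(halts):
    """Given a purported total decider halts(f, x) for "does f(x) halt?",
    build the program that the decider must get wrong."""
    def contrary(f):
        if halts(f, f):   # if f supposedly halts on its own code...
            while True:   # ...loop forever,
                pass
        return "halted"   # ...otherwise halt.
    return contrary

# Feeding `contrary` to itself is the diagonal step: contrary(contrary)
# halts if and only if halts() says it doesn't. So no such halts() exists.
```

You can watch any concrete candidate fail: a decider that always answers "doesn't halt" is immediately wrong about `contrary` itself.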
Abacus Computability
This chapter introduces another formalism that has more machinery available to it than a Turing machine. If you don’t already know how this story goes, I won’t ruin the surprise. Again, this chapter is clever and fun to read. It has lots of diagrams and gives you a good feel for what sort of power an “abacus machine” (something that can do real math) has as a computational engine. If computability is a new field to you, you’ll enjoy this chapter.
Recursive Functions
We now step to the math side of things. This transition is motivated computationally: we discuss a handful of functions and function-building operations that are trivial to compute, and which turn out to be quite powerful when taken together. They are:
Zero: The function that always returns 0.
Successor: The function that, given n, returns n+1.
Identity: Functions that return one of the arguments they are given (also called projections).
Composition: The operation that composes functions.
Primitive recursion: The operation that defines a function’s value at n+1 in terms of its value at n.
Functions built from these are called the “primitive recursive” functions, and some time is spent exploring what they can do. (Answer: quite a bit. Readers of Gödel, Escher, Bach will recognize BlooP.) If we allow one further operation:
Minimization: Given f, finds the least argument for which f returns 0.
we get the “recursive” functions. (Readers of Gödel, Escher, Bach will recognize this as unbounded search, and thus FlooP.)
These are a set of building blocks for some pretty interesting functions, and we are now firmly in math land.
Recursive Sets and Relations
The above definitions are extended to define recursive sets and relations. Generally speaking, we can define sets and relations by indicator functions, which distinguish elements that are in a set from elements that are not. Sets and relations are called “primitive recursive” if their indicator functions can be constructed from primitive recursive building blocks. They are “recursive” if their indicator functions can be constructed from recursive building blocks. They are called “semirecursive” if there is a recursive function which at least says “yes” to elements that are in the set, even if it cannot say “no” to elements that are not in the set.
Note that the indicator function for recursive sets must be total — i.e., must terminate — even though not all recursive functions are total. Also, note that semirecursive sets are more commonly known as “recursively enumerable” sets.
This may all seem a bit abstract, especially following all the computationally-motivated content above. However, these distinctions are good to know. This chapter is perhaps less fun than the others, but no less important. (It’s easy to get recursive and semirecursive sets mixed up, when you’re starting out.)
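A schematic Python illustration of the distinction (my own toy example, not the book's):

```python
# A recursive set has a total indicator function: it always answers.
def is_even(n):
    return n % 2 == 0  # terminates on every input, says yes or no

# A semirecursive (recursively enumerable) set, schematically: we confirm
# membership by searching for a witness, but for non-members the search
# never ends. `property_holds` stands in for any decidable property of
# (n, witness) pairs.
def semi_decide(n, property_holds):
    k = 0
    while True:  # halts (with True) iff some witness exists
        if property_holds(n, k):
            return True
        k += 1
```

The "says yes but may never say no" behavior of `semi_decide` is precisely what separates semirecursive sets from recursive ones.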
Equivalent Definitions of Computability
This is where the magic happens. This chapter shows that a function is recursive iff it is Turing computable. Turing machines and recursive functions are computational engines of precisely the same power. This is one of my favorite results in mathematics, and it’s a beautiful (and the original) bridge between the lands of math and computer science.
This chapter actually shows you how to build a recursive function that computes the result of a Turing machine, and how to encode recursive functions as Turing-machine instructions. Even if you know the results, you may find it useful (or just fun) to toy with an actual bridge between the two formalisms. That’s one thing I really like about this book: it doesn’t just say “and these are equivalent, because [brief proof]”. It actually shows you an encoding. It lets you see the guts of the thing.
If you’re casually interested in math or computer science (or if you need a brush up, or you just want a good time) then I highly recommend reading this book at least through chapter 8. It’s good clean fun to see these things play out in front of you, instead of just hearing the results secondhand.
A Précis of First-Order Logic: Syntax
We now dive into the logic side of things. This chapter introduces the syntax of first-order logic. It’s a pretty solid introduction, and a good way to brush up on the precise syntax if you’re feeling rusty.
A Précis of First-Order Logic: Semantics
Here we start getting into a little bit of model theory, binding the syntax of first order logic to semantics. This chapter would have been valuable before I started reading Model Theory. It is a solid introduction to the concepts.
The Undecidability of First-Order Logic
This chapter is dedicated to encoding a Turing machine as a logical theory. The fact that this can be done is incredibly cool, to say the least.
As before, the chapter provides actual examples for how to encode Turing machines as logical theories. You get to play with sentences that describe a specific Turing machine and the state of its tape. You get to see the chain of implications that represent the computation of the Turing machine. As before, this is an awesome way to get a hands-on feel for a theoretical result that is commonly acknowledged but seldom explored.
The end result, of course, is that you can encode the halting problem as a decision problem, and therefore the decision problem (checking whether a set of sentences Γ implies some sentence D) is uncomputable. However, this book doesn’t just tell you that result, it shows it to you. You actually get to build a set of sentences Γ encoding a Turing machine, and construct a sentence D that expresses “this Turing machine halts”. Even if you’re already familiar with the result, reading this chapter is quite a bit of fun. There’s something magical about seeing the connections between computation and logic laid bare before you.
Models
This chapter goes into basic model theory, discussing concepts like isomorphism, model size, and compactness. (Compactness is explained, but not proven.) This chapter would have been invaluable a month and a half ago, before I started Model Theory. If you’re interested in model theory, this is a great place to get a feel for the ideas. It illustrates the basic concepts that I tried to cover in my very basic model theory post, but it covers them in a much more complete and approachable manner.
The Existence of Models
This chapter is entirely devoted to a proof of the compactness theorem. The chapter walks you through the whole thing and makes it easy. Again, this would have been a good preliminary for Model Theory, in which compactness is proved in about a paragraph. I personally found this chapter a bit slow, but I imagine it would be useful to someone new to the idea of compactness.
Proofs and Completeness
This chapter is about formalizing a system of proofs. It also introduces the concepts of soundness and completeness. Readers of Gödel, Escher, Bach and others familiar with the subject matter will see where this is going, and the chapter very much feels like it’s setting up all the right pieces and putting things in place.
If you’re new to the ideas of arithmetization and representability, then you don’t want to read the next three chapters without reading this. Otherwise, it’s skippable.
Arithmetization
This chapter develops an arithmetization of the syntax of first-order logic, and of the proof system sketched in the previous chapter. This involves developing a numeric encoding of first-order formulas, and recursive functions that can manipulate formulas so encoded. If you’ve never experienced such an encoding before, this chapter is a fun read: otherwise, it’s skippable.
Representability of Recursive Functions
This chapter shows how any recursive function can be expressed as a formula in the language of arithmetic (a theory of first-order logic). This closes our loop: all functions may be encoded as arithmetical formulas, which may themselves be encoded as numbers. This is the mechanism by which we will embed arithmetic in itself.
The chapter then discusses formal systems of arithmetic. PA is mentioned, but a more minimal (and easier to reason about) arithmetic is the focus of conversation.
Indefinability, Undecidability, and Incompleteness
This is the climax of the book. The previous three chapters have been leading here. We construct a function that does diagonalization, then we represent it as a formula, and show how, given any formula A(x), we can construct a sentence G ↔ A('G') via some clever uses of our diagonalization function and the diagonalization formulas. From there, indefinability, undecidability, and incompleteness are just a hop, skip, and jump away.
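The flavor of the construction can be imitated with strings standing in for Gödel numbers (a toy sketch of my own, not the book's formal version): a `diag` operation substitutes a quotation of a template into the template itself, and applying it to a template that mentions `diag` produces a sentence that talks about itself.

```python
def diag(template: str) -> str:
    """Replace the placeholder x in a template with a quotation of the
    template itself (strings stand in for Godel numbers here)."""
    return template.replace("x", repr(template))

# Take the property "... is unprovable" as our A, and diagonalize a
# template that applies A to the diagonalization of x:
template = "diag(x) is unprovable"
G = diag(template)
print(G)  # diag('diag(x) is unprovable') is unprovable
```

Note that `diag('diag(x) is unprovable')` evaluates to G itself, so G asserts its own unprovability — the same trick, with formulas in place of strings, drives the theorems of this chapter.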
Diagonalization in arithmetic is always a pleasure to run your mind over, in my experience. This is a result that’s assumed in many other texts (“and, by diagonalization, we can construct…”), but rarely explored in full. It’s quite fun to pop the thing open and see the little gears. Even if you’re pretty comfortable with diagonalization, you may enjoy this chapter for its crisp treatment of the device.
This chapter comes highly recommended, even if just for fun.
The Unprovability of Consistency
This chapter breaks down Löb’s theorem and proves it in detail. If you’ve made it this far in the book, Löb’s theorem won’t even seem difficult. If you already know Löb’s theorem well, this chapter is skippable. But if the theorem has always seemed somewhat confusing to you (even after the cartoon guide) then this chapter may well be enlightening.
Normal Forms
We now move to the “further topics” section of the book. This section was a bit faster, a bit less motivated, and more prone to dump proofs on you and say “isn’t that neat”.
First, we discuss a variety of normal forms into which we can put logical sentences, each of which makes particular proofs easier. For example, you can do everything with only ∃, ¬, and ∧. Alternatively, you can resolve to deal only with sentences where all the quantifiers come first. Or you can decide to work without function symbols, and so on.
It’s good to know your normal forms; this chapter is worth a read.
The Craig Interpolation Theorem
Craig’s interpolation theorem lets us construct a sentence that sits “between” the two halves of an implication and has some nice properties. (If A→C, then the Craig interpolation theorem lets us construct a B such that A→B, B→C, and B contains only nonlogical symbols that occur in both A and C.) This is a result used often in model theory, and a good tool to have in the toolbox. Read it if you’re curious, skip it if you’re not.
Monadic and Dyadic Logic
This chapter is pretty neat. It turns out that “monadic” logic (first-order logic in a language with only one-place relation symbols) is decidable. However, it also turns out that “dyadic” logic (first-order logic in a language with only two-place relation symbols) is undecidable. (We can actually make that stronger: first-order logic in a language with exactly one two-place relation symbol is undecidable.)
This is interesting, because it shows us precisely where undecidability sets in. First order logic becomes undecidable when you add the first two-place relation symbol. This is a fun chapter to read if you feel like exploring the boundaries of undecidability.
The latter result is proved by showing that we can put any logic into a “normal form” where it has at most one two-place relation symbol (and no relation symbols of higher arity). This result was surprising, and the proof is a fun one.
Second-Order Logic
This chapter introduces second order logic, and shows some basic results (it’s not compact, etc.). It’s not a very in-depth introduction. If you’re interested in exploring second order logic, I’d recommend finding a text that focuses on it for longer than a chapter. This chapter left me a bit unsatisfied.
Arithmetical Decidability
This discusses some results surrounding the definability of truth in arithmetic: for example, the chapter shows that for each n and any sane measure of complexity, the set of true sentences of complexity at most n is arithmetically definable.
The results here were somewhat boring, in my opinion.
Decidability of Arithmetic without Multiplication
It turns out that arithmetic without multiplication is decidable. This is another interesting exploration of the boundaries of undecidability. The result is shown in this chapter by elimination of quantifiers: this is nice, but I feel like the chapter could have had more impact by exploring the mechanism by which multiplication brings undecidability to arithmetic. I plan to explore this subject more in my free time.
Nonstandard Models
This section discusses nonstandard models of arithmetic, and the usual wacky results. If you’ve never played with nonstandard models before, this chapter is a nice introduction.
Ramsey’s Theorem
This chapter introduces Ramsey’s theorem (a result in combinatorics) and uses it to obtain an undecidable sentence that doesn’t do any overt diagonalization. This chapter is kind of fun, and it’s nice to see an undecidable sentence that doesn’t come from direct self-reference trickery, but you won’t miss much if you skip it.
Modal Logic and Provability
This chapter introduces provability logic (and is the reason I picked up this textbook, initially). I was somewhat disappointed with it: it introduces modal logic well enough, but it just sort of dumps a lot of proofs and results on you. This is all well and good, but I was expecting much more detail and motivation, given the impressive motivation found elsewhere in the book. Overall, this chapter felt somewhat rushed.
That concludes the summary. As a minor note, this textbook had way more typos than any other textbook I’ve read. There was one every few chapters. It was especially bad in the “Further Topics” section (ch. 19 and up), where there was a typo every chapter or so. However, the writing quality is otherwise top-notch, so I won’t complain too loudly.
Who should read this?
This book comes highly recommended, especially to people just getting interested in the fields of computability and logic. If you’re interested in computation or logic, this book introduces some of the coolest results in both fields in a very approachable way.
This book excels at showing you the actual mechanisms behind results that are often mentioned but seldom exposed. Even if you know that the deduction problem is equivalent to the halting problem, it’s illuminating to play directly with an encoding of Turing machines as logical theories.
This book is a great way to shore up your understanding of some of the most fun proofs in computability theory and in logic. It would make an excellent companion to a computer science curriculum, and a great follow up to Gödel, Escher, Bach by someone hungry for more formalism.
That said, it’s not a good introduction to (modal) provability logic. Don’t pick it up for that reason, no matter what Luke tells you :-)
What should I read?
You should definitely read chapters 1-8 if you get the chance, especially if you’re not already familiar with the bridge between Turing machines and recursive functions.
Chapters 9-18 also come highly recommended if you want to really understand incompleteness and undecidability. Readers of Gödel, Escher, Bach will find these chapters to be familiar, albeit more formal. They’re a great way to brush up on your understanding of incompleteness, if you feel you need to.
Chapter 18 in particular is quite relevant to some of MIRI’s recent work, and is good to know in the LessWrong circles.
Chapters 19-27 are optional. They’re less polished and less motivated, and more likely to just dump a proof on you. Read the ones that seem interesting, but don’t be afraid to skip the ones that seem boring.
Final notes
This book wasn’t the most useful book I’ve read in this series. I was already quite comfortable with most of these subjects. In fact, I planned to skim the first two sections (up through chapter 18): I only slowed down and gave the book more time when it turned out to be a lot of fun.
I highly recommend reading this book before Model Theory, if you’re planning to read both. Chapters 9-13 would have made Model Theory quite a bit easier to tackle.
This book is not on the MIRI course list, but I recommend putting it on the course list. The subject matter may start out a little basic for much of the audience (I expect people who approach the course list to already know enumerability, the halting problem, etc.), but chapters 9-18 are a good introduction to fields that are important to MIRI’s current research.
I mean, this book takes readers all the way through a proof of Löb’s theorem, and it does it at an easy pace. That’s no small feat. It also sets up Model Theory nicely and has a brief intro to modal logic. It feels a little easy, but that’s not a bad thing. I think this would be a great way to get people familiar with the type of logic problems they’ll have to be comfortable with if they’re going to tackle the rest of the course list (and eventually MIRI research).
If anyone else is thinking of reading the course list, Computability and Logic is a great place to start. It introduces you to all the right concepts. It may not be the fastest way to get up to speed, but it’s definitely entertaining.
As a sixteen-year old maths enthusiast, would this be readable with pretty much just secondary school knowledge?
Definitely. In fact, I think you’re one of the best audiences. By the time you’ve taken some logic courses and a foundations of computing course, this book will be fun but not particularly new. This book is a great intro to some of my favorite parts of math.
Excellent! Thank you very much then!
Funnily enough, Christmas isn’t too far away… Hm...
Edit: Got it! Can’t wait to start reading it.
A friend lent me this exact book recently, and while I followed the first chapter, I quickly gave up upon encountering the exercises, because I could not even do the first one. At all.
I don’t think I am stupid, so I hope you’re just underestimating the required mathematical thinking experience (if not actual background knowledge, none of which is assumed) required to get through this book. Mathematicians do so as a rule.
So that I can better update on this information, can you tell me what the first exercise is?
This is quite likely, especially for the beginning (which is mostly on computation, my area of expertise). That said, I’d guess that your difficulties stemmed from not knowing a good way to approach the problems, and being intimidated by terminology. I bet that if you work your way through a few of them with a friend, and get a feel for how these types of question are tackled, then the other exercises will seem much easier in short order.
I’m also more than happy to go over the early exercises in the book with you, if you want some assistance. PM me if you’re interested.
Oh, wow. I had the Sipser book for computability and complexity back in undergrad, and this sounds like what our prof should have assigned our honors section!
Which edition did you read? The image in the post is of the fifth edition, and some people (eg Peter Smith in his Teach Yourself Logic (§2.7 p24)) claim that the earlier editions by just Boolos and Jeffrey are better.
Cutland’s Computability and Mendelson’s Introduction to Mathematical Logic between them look like they cover everything in this one, and they are both in MIRI’s reading list. What is the advantage of adding Computability and Logic to them? (ie is it easier to start out with, does it cover some of the ground between them that both miss, or is it just good with alternatives?)
The fifth—I had not heard that. Thanks for the tip.
I bet the Computability and Logic books on the course list cover similar subject matter. I read this book instead on the recommendation of Luke: I wanted to read up on provability logic specifically, and this book came recommended (presumably because it has an explicit introduction to provability logic at the end). I am now following it up with another of Luke’s recommendations, which covers provability logic more specifically.
I should probably refrain from suggestions about the content of the course list until after I read the suggested books on Logic & Computability, but I was quite impressed by the way this book took you from zero to Löb’s theorem and made it all seem easy.
Awesome—thanks for the review!
Nitpick: the links in the contents are broken. You need to link to #Enumerability instead of /Enumerability, etc.
Oops, fixed. Thanks.
Another book by George Boolos, Logic, Logic, and Logic, presents (and solves) what he calls “the hardest logic puzzle ever.” The puzzle was created by Raymond Smullyan. Here it is, for those of you who like this sort of thing:
Here’s some more relevant information from the book:
Lurker, but I just had to post here. A version of this was posted on The Big Questions in July. Here is my answer, pasted from there (but modified to use the different names), which beats the given answer because it uses only two questions.
(break for formatting)
These two are not equivalent. The first of these allows you to exploit the fact that some yes-no questions cannot be answered truthfully or falsely, allowing the answer above. The second does not, and implies that some answers from Random are neither true nor false.
Nice! Although, as you mention, your two-question solution doesn’t work given the constraints imposed by Boolos (or Smullyan). Still, very clever.
If the above is quoted correctly, the constraints imposed by Boolos are contradictory. It is not possible that 1) Random always answers truthfully or falsely and 2) Random always answers “da” or “ja”. So the fact that I had to throw out the latter constraint is of no import—since the constraints are contradictory, I had no choice but to throw one out.
Well, you don’t have to throw one out. You could interpret the constraints (which are quoted correctly, btw) as narrowing the set of legitimate questions to those which Random can answer both truthfully and falsely with “da” or “ja”. I’m not sure why Boolos would have that constraint in there unless it was to rule out the kind of solution you’re proposing. Still, I agree the problem is clumsily phrased.
Maybe the question could be patched by adding the possibility that the gods (including Random) don’t respond at all, but also changing Random’s behavior from “randomly answers truthfully or falsely” to “randomly says either ‘da’, ‘ja’ or nothing”. If you patch it in this way, I’m pretty sure a two-question solution isn’t possible.
ETA: Also, I don’t want to downplay the cleverness of your two-question solution. Whether or not it perfectly fulfills the constraints of the problem, it is still very nice.
The patched version requires three questions. In fact it requires asking three people—if you only ask two people any number of questions, Random could pretend that he is the other person and the other person is Random.
First of all, you need a question which lets you determine who any person B is, when asked of Truth.
You want to ask “if B is Truth, then yes, else if B is False, then no, else don’t answer”. Presumably, you can’t directly ask someone not to answer—they can only refuse to answer if they are Random or if yes/no would violate constraints, so this comes out to “if B is Truth, then yes, else if B is False, then no, else tell me if your answer to this question is ‘no’.”
Then you modify this question so that it can be asked either of Truth or False. As before, adding “tell me if the answer to Y is the truthfulness of your answer to this question” will produce a correct answer to a yes/no question from either True or False, so you get “tell me, in the case that B is not Random, if the answer to ‘if B is Truth, then yes, else no’ is the truthfulness of the answer to your question, and in the case that B is Random, if your answer to this question is the negation of the truthfulness of your answer to this question”.
You now have a question that can determine who one person is, when asked of either True or False.
(The version with da and ja is left as an exercise to the reader.)
Use that question to ask person 1 who person 3 is. Then use it to ask person 2 who person 3 is.
If 1 and 2 give the same answer, then they are either Truth or False giving an informative answer or Random imitating an informative answer. Either way you know the identity of 3, and you also know one non-Random person. Ask this non-Random person the question to find the identity of a second person, which gives you all three.
If 1 and 2 give different answers, then one of them is Random. In that case, you know that 3 is not Random, and you can ask 3 who 1 is. You now know the identity of 1, and you know whether 1 or 2 is Random; the non-Random one told you who 3 was, so you have the identity of 3 as well. This again gives you the identity of two people and so you have all three.
Boolos’ solution, presented without explanation. Figuring out why the solution works is itself a (much easier, but still non-trivial) logic puzzle:
Svefg dhrfgvba gb N: Qbrf qn zrna lrf vs naq bayl vs, lbh ner Gehr vs naq bayl vs O vf enaqbz?
Frpbaq dhrfgvba gur tbq (rvgure O be P) gung lbh xabj vf abg Enaqbz (svefg dhrfgvba tvirf lbh guvf vasbezngvba): Qbrf qn zrna lrf vs naq bayl vs Ebzr vf va Vgnyl?
Guveq dhrfgvba gb fnzr tbq nf frpbaq dhrfgvba: Qbrf qn zrna lrf vs naq bayl vs N vf Enaqbz?
I tried to find a good book on the mathematics (not the philosophy!) of second-order logic on my usual sources (like mathoverflow.net discussions), but so far they have rendered nothing. Given that, as I understand it, there is some interest on these forums in SOL, can anyone help me with a recommendation? Thanks.
van Dalen’s Logic and Structure has a chapter on second order logic, but it’s only 10 pages long.
Shapiro’s Foundations without Foundationalism has as its main purpose to argue in favour of SOL, I’ve only read the first two chapters which give philosophical arguments for SOL, which were quite good, but a bit too chatty for my tastes. Chapters 3 to 5 is where the actual logic lives, and I can’t say much about them.
Thanks, I’ll check them out.
Does anyone have the solutions for this book?
I think you mean all infinite subsets of natural numbers.
I meant that you can enumerate all finite collections of natural numbers. I’ve updated the wording to make this more clear.
I think you mean “I think you mean all finite subsets of natural numbers” ;)
The power set of the natural numbers is the union of the finite subsets of the natural numbers and the infinite subsets of the natural numbers. The former is in bijective correspondence with N.* So if the infinite subset were in bijective correspondence with N, then N would be in bijective correspondence with its powerset. But this is not the case by Cantor’s theorem, so there cannot be a bijection between N and its infinite subsets.
*Using:
{} → [empty sum] + 1 = 1
{1} → 2^(1−1) + 1 = 2
{2} → 2^(2−1) + 1 = 3
{1,2} → 2^(1−1) + 2^(2−1) + 1 = 1 + 2 + 1 = 4
{2,3,5,7} → 2^(2−1) + 2^(3−1) + 2^(5−1) + 2^(7−1) + 1 = 2 + 4 + 16 + 64 + 1 = 87, and so on.
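For anyone who wants to check that correspondence directly, a quick sketch in Python (the function name is mine):

```python
def encode(subset):
    """The footnote's bijection: a finite set of positive integers maps
    to 1 plus the sum of 2^(n-1) over its elements, i.e. 1 plus the
    number whose binary digits mark the set's members."""
    return 1 + sum(2 ** (n - 1) for n in subset)

# encode(set()) == 1, encode({1}) == 2, encode({2}) == 3,
# encode({1, 2}) == 4, encode({2, 3, 5, 7}) == 87
```

Since every positive integer has a unique binary expansion, this hits each positive integer exactly once, so it really is a bijection.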
Also mentioned at http://lesswrong.com/lw/j8/the_crackpot_offer/
Ah, I was taking So8res to be talking about how, for any infinite subset of the natural numbers, there are bijections between that set and the natural numbers, but your interpretation (bijections between the natural numbers and the set of finite subsets of the natural numbers) makes much more sense. Retracted; thanks.