Reason to learn math deeply: by forcing you to master alternating quantifiers, it expands your ability to understand and handle complex arguments.
This falls, possibly, under your “developing better general reasoning skills”, but I would stress it separately, because I think it’s an especially transferable skill that you get from learning rigorous math. Humans find chains of alternating quantifiers (statements like “for every x, there exists y, such that for every z...”) very difficult to process. Even at length 2, people without training often confuse the meanings of forall-exists and exists-forall. To get anywhere in rigorous math, a student needs to confidently handle chains of length 4-5 without confusion or undue mental strain. This is drilled into the student during the first 1-2 years of undergraduate rigorous math, starting most notably with the epsilon-delta formalism in analysis. The reason this formalism is notoriously difficult for many students to master is precisely that it trains and drills longer chains of quantifiers than the students have hitherto been exposed to.
(other math-y subjects have their own analogues; for example, I think the chief reason the pumping lemma is taught in CS education is to force the same sort of training)
The benefit you get is not merely the ability to easily handle statements of this sort in other, non-math fields. It seems to be (personal speculation, no hard data) more general: your ability to understand complex descriptions and arguments with multiple “points of view” improves. Understanding chains of quantifiers demands calmly keeping track of several interrelated facts in your short “reasoning memory”, so to speak. With this ability improved, it’s easier to keep track of complex interrelated he said-she said-they said scenarios, what-if-x-then-y-then-z-but-not-w scenarios, and so on.
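To make this concrete (the standard textbook definition, not something specific to the comment above): the epsilon-delta definition of a limit is already a forall-exists-forall chain, and swapping just the first two quantifiers produces a much stronger, generally false statement:

```latex
% Definition of  \lim_{x \to a} f(x) = L:  a three-deep alternating chain.
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :\quad
  0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon

% Swapping the first two quantifiers demands one \delta that works for
% every \varepsilon, which forces f to equal L near a: a different claim.
\exists \delta > 0 \;\; \forall \varepsilon > 0 \;\; \forall x :\quad
  0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

This is the length-2 forall-exists vs. exists-forall confusion in the wild: the two statements differ only in quantifier order, yet mean entirely different things.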
Reasons not to learn math deeply, assuming you’re not a research mathematician or going to become one:
It’s effectively bottomless (unless you actually become a mathematician, and even then you remain mostly ignorant of many subfields). That is, you’ll never say “OK, now I understand the deep principles of why math is the way it is”. Rather, you’ll always be aware there’s fascinating depth beyond what you already know. This can be frustrating.
If you don’t use it, you’ll forget it in a few years. Maybe you’ll remember the basic definitions, but you’ll likely forget the deep results, and definitely their proofs. If you were emotionally invested in their beauty and thought them a valuable part of your mindscape, this will frustrate and dismay you.
Unlike with almost any other scientific discipline, you likely won’t be able to construct cool, handwavy, layman-accessible metaphors for the deep stuff you found so fascinating. You can chat with outsiders about neutrons, chemical solutions, DNA or phonemes, but not about L-functions.
I do agree with the part about the quantifiers. This is, at least in theory, one of the reasons we are supposed to teach the epsilon-delta definition of limit in college calculus courses. I generally try to frame it as a game between the prover and the skeptic; see for instance the description here. One of the main difficulties students have with the definition is keeping straight whose strategic interest lies in what: for instance, who should be the one picking the epsilon, and who should be the one picking the delta (the misconceptions listed on the same page highlight common mistakes students make in this regard).
Incidentally, this closely connects with the idea of steelmanning: in a limit proof, or any proof that a definition involving quantifiers is satisfied, one needs to demonstrate a winning strategy against every move one’s opponent could make, and in particular against the best move the opponent could possibly make.
The first time I taught the epsilon-delta definition in a (non-honors) calculus class at the University of Chicago, even though I did use the game setup, almost nobody understood it. I’ve had considerably more success in subsequent years, and it seems like students now get something like 30-50% of the underlying logic on average (I’m judging based on their performance on hard conceptual multiple-choice questions about the definition).
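The prover-skeptic game is easy to sketch in code. The following is my own illustrative toy (the function, limit point, and strategy are all chosen for the example, not taken from any course materials): the skeptic challenges with an epsilon, the prover must answer with a delta, and we spot-check numerically that the prover’s strategy survives every challenge.

```python
# Game for the claim  lim_{x -> 2} x^2 = 4.
# Skeptic's move: pick epsilon > 0.  Prover's move: answer with delta > 0.
# Prover wins if every x with 0 < |x - 2| < delta satisfies |x^2 - 4| < epsilon.

def f(x):
    return x * x

def prover_delta(epsilon):
    # Prover's strategy: when |x - 2| < 1 we have |x + 2| < 5, so
    # |x^2 - 4| = |x - 2| * |x + 2| < 5 * |x - 2|.
    # Hence delta = min(1, epsilon / 5) suffices.
    return min(1.0, epsilon / 5.0)

def skeptic_loses(epsilon, samples=1000):
    """Spot-check the prover's delta at many points strictly inside the band."""
    delta = prover_delta(epsilon)
    for i in range(1, samples + 1):
        offset = delta * i / (samples + 1)  # offsets in the open interval (0, delta)
        for x in (2.0 + offset, 2.0 - offset):
            if abs(f(x) - 4.0) >= epsilon:
                return False  # skeptic found a counterexample: prover loses
    return True

# The prover's strategy survives every epsilon the skeptic tries:
for eps in (1.0, 0.1, 0.001):
    assert skeptic_loses(eps)
```

The point of the framing is that the delta is allowed to depend on the epsilon (forall-exists), but not the other way around; the strategy function makes that dependence explicit.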
Couldn’t you develop the same skill more efficiently by just studying formal logic?
Probably? But the number of people who study formal logic to the required degree is dwarfed by the number of people who need this skill.
Also, mathematical logic, studied properly, is hard. It forces you to conceptualize a clean-cut break between syntax and semantics, and then to learn to handle them separately and jointly. That’s a skill many mathematicians don’t have (to be fair, not because they couldn’t acquire it, they absolutely could, but because they never found it useful).
I have a personal story. Growing up I was a math whiz; I loved popular math books, and in particular logical puzzles of all kinds. I learned about Gödel’s incompleteness theorems from Smullyan’s books of logical riddles, for example. I was also fascinated by popular accounts of set theory and the independence of the Continuum Hypothesis. In my first year at college, I figured it was time to learn this stuff rigorously. So, independent of any courses, I just went to the math library and checked out the book by Paul Cohen where he sets out his proof of the independence of CH from scratch, including first-order logic and axiomatic set theory from first principles.
I failed hard. It felt so weird. I just couldn’t get through. Cohen begins by setting up rigorous definitions of what logical formulas and sentences are; I remember he used the term “w.f.f.s” (well-formed formulas), which are defined by structural induction and so on. I could understand every word, but it was as if my mind went into overload after a few paragraphs. I couldn’t process all these things together and understand what they meant.
Roll forward maybe a year, or a year and a half, I don’t remember. I’m past the standard courses in linear algebra, analysis, and abstract algebra, plus a few more math-oriented CS courses (my major was CS). I have a course in logic coming up. Out of curiosity, I pick up the same book in the library and I am blown away: I can’t understand what it was that stopped me before. Things just make sense; I read a chapter or two leisurely until it gets hard again, but a different kind of hard, deep inside set theory.
After that, whenever I opened a math textbook and saw in the preface something like “we assume hardly any prior knowledge at all, and our Chapter 0 recaps the very basics from scratch, but you will need some mathematical maturity to read this”, I understood what they meant. Mathematical maturity—that thing I didn’t have when I tried to read a math logic book that ostensibly developed everything from scratch.
I think this notion of “mathematical maturity” is hard to grasp for a beginning student.
I had a very similar experience. The introduction to (the Russian edition of) Fomenko & Fuchs’ “Homotopic Topology” said that “later chapters require a higher level of mathematical culture”. I thought that this was just a weaselly way of saying “they are not self-contained”, and disliked it as deceptive. Now, a few years later, I know fairly well what they meant (although, alas, I still have not read those “later chapters”).
I wonder if there is a way to explain this phenomenon to those who have not experienced it themselves.
Interesting off-topic fact about Fomenko: I’d read his book on symplectic geometry, and then discovered he’s a massive crackpot. That was a depressing day.
He is a massive crackpot in “pseudohistory”, but he is also a decent mathematician. His book on symplectic geometry is probably fine, so unless you are generally depressed by the fact that mathematicians can be crackpots in other fields, I don’t think you should be too depressed.
Yes.
Your point 1 resonates with me. Learning math has steadily increased my effectiveness as a scientist/engineer/programmer. Sometimes just knowing that a mathematical concept exists, and roughly what it does, is enough to give you an edge in solving a problem; you can look up how to do it in detail when you need it. However, despite the fact that life continues to demonstrate the utility of the math I’ve learned, this has failed to translate into an impulse to actively learn more math. Pretty much at any point in the past I’ve felt like I knew “enough” math, and yet I’ve always seen a great benefit when I learned more. You’d think this would sink in; you’d think I would start learning math for its own sake with the implicit expectation that it will very probably come in handy. But it hasn’t.
Thanks for the thoughtful and insightful comment. I really appreciate it :)