Programming has already been mentioned, but I’d like to note that different sorts of programming languages teach different things. Scheme-like languages cause a very different sort of thinking than C-like languages for example. That said, I think that shminux touched on the biggest lesson from programming. But it does produce other lessons like how to break tasks down into smaller, more manageable tasks, and how to investigate things that aren’t doing what they are supposed to do.
So now onto other areas:
Psychology and cognitive science. Learned a lot about heuristics, fallacies and biases. Learned also that they apply to me (although I’m not sure I’ve internalized that as much as I should). Also, learned the important lesson that a good way of understanding complex systems is by how/when they go wrong.
Set theory: Did a really good job of showing how even reasonable-sounding premises can lead to contradictions quite quickly. (If there’s any general reason not to take Anselm-like arguments seriously it is this, aside from the specific issues with most of those sorts of arguments.) Set theory also helps one see mathematics as a whole and see how different areas connect to each other. Also, big sets are big. Although I enjoyed math well before I studied set theory, I first had the feeling of the numinous in a mathematical context when thinking about the cardinality of sets that can be made in ZFC. Almost embarrassingly, the first sets that really triggered this were produced simply using the axiom of replacement: N ∪ P(N) ∪ P(P(N)) ∪ P(P(P(N))) ∪ … (where N is the natural numbers and P is the powerset operation). After seeing all sorts of large cardinals, this now looks almost like a little child feeling awed by thinking about one thousand. Also, I used to be religious, and I suspect that one thing that helped me become less religious was that I found far more of the numinous in math and science than I did in religion.
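For the record, the size of that union is a standard cardinal-arithmetic computation in ZFC (the beth hierarchy):

```latex
|\mathbb{N}| = \beth_0 = \aleph_0, \qquad |P^n(\mathbb{N})| = \beth_n = 2^{\beth_{n-1}}, \qquad
\Bigl|\, \mathbb{N} \cup P(\mathbb{N}) \cup P(P(\mathbb{N})) \cup \cdots \Bigr| \;=\; \sup_{n<\omega} \beth_n \;=\; \beth_\omega .
```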
Probability and combinatorics did a really good job teaching me how bad human intuition is about basic probability.
Astronomy taught me how just mind-bogglingly large the universe is. (Cue obvious Hitchhiker’s references.)
For what it’s worth, I strongly disagree. For a new student too much emphasis on foundations can be a major mental block when getting used to a new idea and especially a new circle of ideas. Set theory is used very informally in most of mathematics, as a notation. To learn more than this notation is mostly unnecessary for pure math, completely unnecessary for applications of math to other areas.
This kind of sentiment always reminds me of this.
Well, the difference between mathematics and the natural sciences is that you need not only to build a good model of something, but also to describe it using a limited set of axioms (using proofs from there on).
For some people set theory is the area of mathematics which quickly reaches proofs that are accessible to our reasoning, but transcend our intuition. Sometimes even the notions used are easy to define formally but put strain on your imagination. For people inclined to mathematics this can be a powerful experience.
But that’s nothing special about set theory. I prefer to think that the role of mathematics (at least the best kinds of mathematics) is to correct and extend our intuition, not to “transcend” it. But the kind of powerful experiences you describe were available two thousand years before the invention of set theory, and they’re available all over modern math in areas that have nothing to do with set theory.
Not every student would benefit from learning set theory early, beyond the universally needed understanding of injective/bijective mappings, but some would. It depends on personality, and it has some relation to cultural background.
“Role of mathematics” implies a relatively long run; the experience is felt in a very short run.
If you want to extend your intuition into an area nobody understands well, you often need to combine quite weak analogies and formal methods—because you need to do something to get any useful intuition.
There are many branches of science where you can get an amplified feeling of understanding something in an area where you don’t have a working intuition. There are three culture-related questions, though. First, how much (true or fake) understanding of the facts do you get from the surrounding culture before you learn the truth? Second, how much do you need to learn before you can understand a result that surprises you? Third, is it customary to show the easiest-to-understand surprising result early in the course?
Of course, for different people in different cultural environments, different areas of math or the natural sciences will work best. But it does seem that for some people the easiest way to get an example of reasoning in an area that is not yet intuitively comprehensible is to learn set theory from easily accessible sources.
The construction of (other parts of) mathematics from set theory is a very important lesson in reductionism.
So important, in my view, that it outweighs the disadvantages of set theory that you often hear people complaining about.
I don’t agree. Math is not made out of sets in the same way that matter is made out of atoms. In terms of reductionism differential equations are more fundamental than sets.
Would you care to give an argument for this? This strikes me as wildly implausible, and my default interpretation is as a rhetorical statement to the effect of “boo set theory!!”
I’ve never seen set theory reduced to differential equations. On the other hand, the reduction of analysis (including differential equations) to set theory is standard and classical.
There are a lot of phenomena—in mathematics, in the cosmos, and in everyday experience—that you cannot understand without knowing something about differential equations. There are hardly any phenomena that you can’t understand without knowing the difference between a cardinal and an ordinal number. That’s all I mean by “fundamental.”
But here is a joke answer that I think illustrates something. Differential equations govern most of our everyday experiences, including the experience of writing out the axioms for set theory and deducing theorems from them. And we can model differential equations in a first order theory of real numbers, which requires no set theory. A somewhat more serious point along these lines is made in some famous papers by Pour-El and Richards.
Is this a good way to think about set theory? Of course not. But likewise, the standard reduction to set theory does not illuminate differential equations. Boo set theory!
Like I suspected, this is rife with confusion of levels.

There are a lot of phenomena—in mathematics, in the cosmos, and in everyday experience—that you cannot understand without knowing something about differential equations. There are hardly any phenomena that you can’t understand without knowing the difference between a cardinal and an ordinal number. That’s all I mean by “fundamental.”

That’s like saying that you can get through life without knowing about atoms more easily than you can without knowing about animals, and so biology must be more fundamental than physics. Completely the wrong sense of the word “fundamental”.

Differential equations govern most of our everyday experiences, including the experience of writing out the axioms for set theory and deducing theorems from them.

This is a classic confusion of levels. It’s the same mistake Eliezer makes when he allows himself to talk about “seeing” cardinal numbers, and when people say that special relativity disproves Euclidean geometry, or that quantum mechanics disproves classical logic.

And we can model differential equations in a first order theory of real numbers, which requires no set theory

Your conception of “differential equations” is probably too narrow for this to be true. Consider where set theory came from: Cantor was studying Fourier series, which are important in differential equations.

But likewise, the standard reduction to set theory does not illuminate differential equations

...and nor does the reduction of biology to physics “illuminate” human behavior. That just isn’t the point!
Nope. It is literally possible to reduce the theory of Turing machines to real analytic ODEs. These can be modeled without set theory.
Okay, that sounds interesting (reference?), but what about the rest of my comment?
Here is Pour-El and Richards. Here is a more recent reference that makes my claim more explicitly. Both are gated.
I’m not sure what to say. You’ve accused me of “confusing levels,” but I’m exactly disputing the idea that sets are at a lower level than real numbers. Maybe I know how to address this:

But likewise, the standard reduction to set theory does not illuminate differential equations

...and nor does the reduction of biology to physics “illuminate” human behavior. That just isn’t the point!

I don’t know about human behavior, which isn’t much illuminated by any subject at all. But the reduction of biology to physics absolutely does illuminate biology. Here’s Feynman in Six Easy Pieces:

Everything is made of atoms. That is the key hypothesis. The most important hypothesis in all of biology, for example, is that everything that animals do, atoms do. In other words, there is nothing that living things do that cannot be understood from the point of view that they are made of atoms acting according to the laws of physics. This was not known from the beginning: it took some experimenting and theorizing to suggest this hypothesis, but now it is accepted, and it is the most useful theory for producing new ideas in the field of biology.

You simply can’t say the same thing—even hyperbolically—about the set-theoretic idea that everything in math is a set, made up of other sets.
Matiyasevich’s book “Hilbert’s 10th Problem” sketches out one way to do this.
Hilbert’s 10th problem is about polynomial equations over the integers. This is a vastly different thing.
Yes, Hilbert’s 10th Problem asked whether there is an algorithm for deciding whether a given Diophantine equation has solutions over the integers. The answer turned out to be “no,” and the proof (which took many years) in some sense amounted to showing that for any Turing machine and starting tape one can construct a Diophantine equation that has a solution iff the Turing machine halts in an accepting state. Some of the results and techniques used for that can be used to show that other classes of problems can model Turing machines, and that’s the context in which Matiyasevich discusses it.
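That correspondence is what makes solvability only semi-decidable: you can search for a solution forever, but you can never confirm there is none. A minimal illustration of the searching half (a brute-force enumerator, not Matiyasevich’s construction):

```python
from itertools import count, product

def search(p, k):
    """Semi-decision procedure: enumerate integer k-tuples in growing
    boxes. Halts (returning a solution) iff p(x1..xk) = 0 is solvable;
    otherwise it loops forever."""
    for bound in count(0):
        for xs in product(range(-bound, bound + 1), repeat=k):
            if p(*xs) == 0:
                return xs  # first solution in enumeration order

# The Pell equation x^2 - 2y^2 = 1; the trivial solution is found first.
print(search(lambda x, y: x * x - 2 * y * y - 1, 2))  # -> (-1, 0)
```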
What signature do we need for it? Because in the first-order theory of real numbers without sets you cannot express functions or sequences.
For example, the full first-order theory of everything expressible about real numbers using “+, *, =, 0, 1, >” can be decided algorithmically (this is Tarski’s result on real closed fields).
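The engine behind that decidability is quantifier elimination; the textbook example over the reals is:

```latex
\exists x \, (x^2 + bx + c = 0) \;\Longleftrightarrow\; b^2 - 4c \ge 0 .
```

Every formula in this signature reduces, quantifier by quantifier, to such a quantifier-free condition, which can then be checked directly.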
I’m not sure; presumably to “+, *, =, 0, 1, >” one adds a bunch of special functions. The “o-minimal approach” to differential geometry requires no sequences. Functions are encodable as definable graphs, and so are vector fields.
As you note, for completeness reasons an actual o-minimal theory is not as strong as set theory; one has to smuggle in e.g. the natural numbers somehow, maybe with sin(x). I could have made a less tendentious point with Gödel numbering.
Again, this is meant as a kind of joke, not as a natural way of looking at sets. The point is that I don’t regard von Neumann’s {{},{{}},{{},{{}}}} as a natural way of looking at the number three, either.
Once you say that functions are definable graphs, you are on a slippery slope. If you want to prove something about “all functions,” you have to be able to quantify over all formulas. This means you have already smuggled natural numbers into the model without defining their properties well...
When you consider a usual theory, you are only interested in the formulas you can actually write down—not so here, if you want to say something about all expressible functions.
And studying (among other things) effects of smuggling natural numbers used to count symbols in formulas into the theory is one of the easy-to-reach interesting things in set theory.
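For concreteness, here is a toy version of the Gödel numbering mentioned above: a formula (string of symbols) becomes one natural number via prime-power encoding, and can be decoded back. The alphabet here is made up for illustration (with “E” standing in for ∃); any injective symbol coding works.

```python
SYMBOLS = "0123456789+*=>()-xE"  # toy alphabet; 'E' stands for the quantifier

def primes():
    """Generate 2, 3, 5, 7, ... by trial division."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(formula):
    """Gödel number: product of p_i ** (code of i-th symbol)."""
    g, ps = 1, primes()
    for ch in formula:
        g *= next(ps) ** (SYMBOLS.index(ch) + 1)
    return g

def decode(g):
    """Recover the formula by reading off prime exponents."""
    out, ps = [], primes()
    while g > 1:
        p, e = next(ps), 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(SYMBOLS[e - 1])
    return "".join(out)

assert decode(encode("Ex(x*x=2)")) == "Ex(x*x=2)"
```

Once formulas are numbers, “quantify over all formulas” becomes “quantify over certain natural numbers”—which is exactly the smuggling being described.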
About natural numbers: the direct set representation is quite unnatural; the underlying idea of a well-ordered set is just an expression of the idea that natural numbers are the numbers we can use for counting.
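The “direct set representation” under discussion is von Neumann’s, where n = {0, 1, …, n−1} and successor is S(x) = x ∪ {x}; a few lines of Python (using frozensets as hashable sets) make both its mechanics and its unnaturalness visible:

```python
def von_neumann(n):
    """Build the von Neumann numeral for n as nested frozensets."""
    s = frozenset()           # 0 = {}
    for _ in range(n):
        s = s | {s}           # successor: S(x) = x U {x}
    return s

three = von_neumann(3)        # {{}, {{}}, {{}, {{}}}}
assert len(three) == 3        # |n| = n ...
assert von_neumann(2) in three  # ... and m < n iff m is an element of n
```

The arithmetic comes out right, but nobody would claim that membership in a nest of empty braces is what “three” means.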
The true all-mathematical value of set theory is, of course, to be a universal measure of weirdness: if your theory can be modelled inside ZFC, you can stop explaining why it has no contradictions.
I’m not sure one is more or less fundamental than the other. It does seem fair to say that as far as differential equations are concerned a completely different foundational setting wouldn’t make any difference. So it isn’t analogous to reductionism in that the behavior isn’t brought about by the local interaction of pieces under the hood.
Really? You don’t think the demonstrable reducibility of other branches of mathematics to set theory means anything?
This is actually a vacuous statement, because if it did make a difference, you wouldn’t call it “a completely different foundational setting” of the same subject. Similarly, it wouldn’t “make any difference” if, hypothetically, a 747 (as we understand it in terms of high-level properties) turned out to be made of something other than atoms; because by assumption the high-level properties of the thing we’re reducing are fixed.
The important point is whether something can be reduced, not whether it must be.
I really don’t know enough about programming to make a properly impressive analogy, but set theory is like a lower-level language or operating system on top of which other branches can be made to run.
I think the right analogy is not to building 747s out of parts, but to telling stories in different languages. The plot of “3 little pigs” has nothing to do with the English language, and the plot of Wiles’s proof of Fermat’s last theorem has nothing to do with set theory.
Not in any strong sense, no. I reduce things to other fundamental frameworks also. One could, for example, choose categories to be one’s fundamental objects and do pretty well. To extend your 747 analogy, this is closer to having two different 747s, one made of atoms and the other made from the four classical elements, such that any 747, once assembled, could be disassembled into either atoms or earth, air, fire, and water.
Well, we can disassemble every planetary orbit into epicycles. Does that mean that our astronomical knowledge based on Newton’s mechanics is worthless?
Epicycles are only a rough approximation that doesn’t work very well, and they don’t in any way give you Kepler’s third law (the relationship between the orbits). I’m also confused: even if that were the case, it wouldn’t make Kepler’s or Newton’s mechanics worthless. What point are you trying to make?
Obviously Kepler’s astronomical model is superior, and that line might have been rhetorical flourish, but the Ptolemaic, Copernican, and Tychonic models were by no means “rough approximations” that don’t “work very well.” Epicycles worked very well, which is part of why it took so long to get rid of them: the deviations of theory from actual planetary paths were so small that they were only detectable over long periods of time or with unprecedented observational accuracy (before Brahe).
(I don’t understand the grandparent’s point either and agree that mathematical reduction to set theory is a different sort of thing from physical reduction to quantum field theory—just pointing this one thing out.)
Yes, by “doesn’t work very well” I mean more “doesn’t work very well when you have really good data.” I should have been more clear.
Epicycles can give you arbitrary precision if you use enough of them… It is quite similar to a Fourier transform.
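That analogy can be made literal: an orbit traced in the complex plane is a sum of circular motions (epicycles), and the discrete Fourier coefficients are exactly the radii and speeds of those circles. A small sketch (the off-center ellipse here is illustrative, not real planetary data):

```python
import numpy as np

# An off-center ellipse traced in the complex plane. By construction it is
# exactly three Fourier terms: a constant "deferent" 0.3 plus the circles
# 0.8*exp(it) and 0.2*exp(-it).
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
orbit = 0.3 + np.cos(t) + 0.6j * np.sin(t)

def epicycle_approx(curve, n):
    """Approximate a closed curve using its n largest-amplitude epicycles."""
    c = np.fft.fft(curve)
    keep = np.argsort(np.abs(c))[::-1][:n]  # n biggest circles
    mask = np.zeros(len(c))
    mask[keep] = 1
    return np.fft.ifft(c * mask)

err = lambda n: np.max(np.abs(orbit - epicycle_approx(orbit, n)))
print(err(1), err(3))  # error shrinks; three epicycles reproduce it exactly
```

For a general orbit the coefficients never vanish exactly, but the error still goes to zero as you add circles—which is why epicycles could be pushed to any accuracy the observations demanded.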
My point is that in most cases you can disassemble a 747 into various collections of parts.