There are elementary statements about arithmetic that don’t have obvious real-world equivalents, or where the obvious physical implication is false.
Suppose I tell you, for instance, that there’s a real number that when multiplied by itself equals 2. There’s a natural geometric interpretation of this in terms of ratios; it says that given two segments, A and B, there’s a segment C such that A:C is the same ratio as C:B.
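To make that ratio reading concrete (a small worked step, taking A as a unit segment and B as a segment twice as long, which is one instance the claim covers):

$$\frac{A}{C} = \frac{C}{B} \;\Longrightarrow\; C^2 = A \cdot B, \qquad \text{so with } A = 1 \text{ and } B = 2,\; C = \sqrt{2}.$$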
But it could very easily be the case that this is not true in the actual geometry of the universe around us; measurements are noisy, space is curved, space isn’t quite continuous, etc. But the math is unobjectionable and exact.
So that suggests that at least some elementary math can be, and has to be, justified in some way other than “true for the universe around us.”
A thought (and I might just be being crazy here): if we treat mathematics as a specific case of analogical reasoning à la Hofstadter or Gentner, it seems we could think of mathematics as layered analogies.
More concretely: geometry, arithmetic, and algebra have obvious physical analogues and seem to have been derived by generalizing certain kinds of action protocols. Basic algebra lets one generalize about which transactions are beneficial; geometry lets one generalize about the relative sizes of things and, well, about a lot of more complicated matters like architecture.
Mathematics can be thought of as a sort of protocol logic. We use protocols to reason about protocols, and so we can devise a protocol logic for types of protocol logics. This seems to be what many of the more abstract areas of mathematics really are: they reason analogically from other domains of mathematics, borrowing similar tricks, and apply them to thinking about still other parts of mathematics. In this way mathematics acts as its own subject matter and builds on itself recursively.
Take mathematical logic (from an historical perspective) for example. Mathematical logicians look at what mathematicians actually do: they take the black box “doing math” and devise a rule set that captures it; they search for a representative protocol. N logicians could devise N hypotheses and see where the hypotheses diverge from the black box (‘inconsistent!’ one may shout, ‘underpowered, cannot prove this known result!’ yet another might say). As in any other endeavor, we cannot expect that we have hit on the correct hypothesis, and indeed new set theories and logics are still being toyed with today.
Just take Ross Brady’s work on universal logic. He devised an alternative logic in which to build a set theory that allowed for an unrestricted axiom of comprehension, nearly one hundred years after Russell’s paradox.
It seems to me that ultimately a mathematical logician should want to obtain a mechanical understanding of mathematics; the task of building a machine that can create new mathematics (as opposed to simply searching the space of known mathematics or, simpler still, the space of known analytic functions) requires this understanding.
I expect such a machine to take its input data and arrange expected changes into some sort of logical protocols so that it can compute counterfactuals. I expect that recurrent protocols of this sort should be cached and consolidated by some process, which seems very hard to define algorithmically.
This actually makes quite a bit of sense (to me, of course) in terms of outcomes: it would explain why mathematics is so applicable; it is all about analogical reasoning and reasoning about certain types of protocols.
So, am I crazy? Did that spiel make any damned sense?
Just take Ross Brady’s work on universal logic. He devised an alternative logic in which to build a set theory that allowed for an unrestricted axiom of comprehension, nearly one hundred years after Russell’s paradox.
I don’t know the book, but here’s a review. Unrestricted comprehension comes at the expense of a restricted logic, an inevitable tradeoff ever since Russell torpedoed Frege’s system. It’s like one of those sliding-block puzzles. However you slide the blocks around, there’s always a hole, and I don’t see much philosophical significance in where the hole gets shifted to.
Yes, I’ve read that review and you’re correct. Probably a bad example. Anyway, my general point was that mathematics is built from concrete subject matter, and mathematics itself, being a neurological phenomenon, is as concrete a subject matter as any other. We take examples from our daily comings and goings and look at the logic (in the colloquial sense) of them to devise mathematics. The activity of doing mathematics itself is one part of those comings and goings, and this seems to me to be the source of many of the seemingly intractable abstractions that make ideas like Platonism so appealing.

Does that seem correct to you?
You would find Lakoff and Núñez’s Where Mathematics Comes From interesting; their thesis is along these lines. I read the first chapter and got a lot out of it.
It seems that you prefer the second interpretation of mathematical statements. But the first one, the one that refers to the physical world, isn’t completely unattractive either.
For example, I can use multiplication only for calculating areas of rectangles; if so, I would probably hold that “5x3” means “the area of a rectangle whose sides measure 5 and 3”, and “there is a real number which multiplied by itself equals two” means “there is a square of area 2”. I would say that 5 times 3 equals 15 if and only if it held for all real rectangles I had met, and after seeing a counter-example I would abandon the belief that “5x3=15”.
Or I can mean “if I add together five groups of three apples each, I would find fifteen objects”. If this were my understanding of what the proposition means, a counter-example consisting of a 5x3 rectangle with an area of 27 wouldn’t persuade me to abandon the abstract belief, because multiplication is not inherently about rectangles.
In the real world people are familiar with both uses of multiplication and many others, so any counter-example in one area is likely to be perceived as evidence that multiplication isn’t a good model of that process, rather than that we have to update our understanding of multiplication.
Math is exact in the sense that once the rules of inference are given there is no freedom but to follow them, and unobjectionable in the sense that it is futile to dispute the axioms. Any axiomatic system is like that. But most mathematical models we actually use have one advantage over that: they are not arbitrary, but rather designed to be useful for describing a plethora of different real-world situations. Removing a single application of multiplication doesn’t shatter the abstract truth, but if you successively realised that multiplication captures neither calculating areas, nor putting groups of objects together, nor any other physical process, what content would remain in propositions like “5x3=15”? They would be meaningless strings produced by an arbitrary prescription.
For example, I can use multiplication only for calculating areas of rectangles; if so, I would probably hold that “5x3” means “the area of a rectangle whose sides measure 5 and 3”, and “there is a real number which multiplied by itself equals two” means “there is a square of area 2”.
Or I can mean “if I add together five groups of three apples each, I would find fifteen objects”.
As a quick aside, I think these two interpretations are actually the same thing in disguise. Areas as measurements have units attached to the numbers. Specifically, the units are squares whose sides measure one “unit length”. So when you’re looking at a rectangle that measures 5x3, you’re noting that there are five groups of three squares (or three groups of five squares, depending on how you want to interpret the roles of the factors). Otherwise it’s hard to see why the area would be a result of multiplying the lengths of the sides.
I think perhaps a better example would be the difference between partitive and quotative division. Partitive (“equal-sharing”) says “I have X things to divide equally between N groups. How many things does each group get?” Quotative (“measurement” or “repeated subtraction”) says “I have X things, and I want to make sure that each group gets N of those things. How many groups will there be?” This is the source of not a small amount of confusion for children who are taught only the partitive interpretation and are given a jumble of partitive and quotative division word problems. It’s not immediately obvious why these two different ideas would result in the same numerical computation; it’s actually a result of the commutativity of multiplication and the fact that division is inverse multiplication. So there’s a deep structure here that’s invisible even to the participants, yet it still guides their activities and understanding.
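As a toy illustration (my own sketch, not from the thread), here are the two readings of 15 ÷ 3 written out as genuinely different procedures that land on the same number:

```python
def partitive(total, groups):
    """Equal sharing: deal `total` items one at a time into `groups` piles
    and report how many each pile ends up with (assumes it comes out even)."""
    piles = [0] * groups
    for i in range(total):
        piles[i % groups] += 1          # deal round-robin, like dealing cards
    return piles[0]

def quotative(total, per_group):
    """Measurement / repeated subtraction: keep removing `per_group` items
    and report how many groups that makes."""
    groups = 0
    while total >= per_group:
        total -= per_group
        groups += 1
    return groups

print(partitive(15, 3))   # 5 -- each of the 3 groups gets 5 things
print(quotative(15, 3))   # 5 -- 15 things make 5 groups of 3
```

The two loops describe different activities, yet they always agree, which is exactly the commutativity fact hiding underneath.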
Math is exact in the sense that once the rules of inference are given there is no freedom but to follow them, and unobjectionable in the sense that it is futile to dispute the axioms. Any axiomatic system is like that.
I agree that axiomatic systems are like that, but I don’t think the essence of math is axiomatic. That’s one method by which people explore mathematics. But there are others, and they dominate at least as much as the axiomatic method.
For instance, Walter Rudin’s book Real and Complex Analysis goes through a marvelously clean and well-organized axiomatic-style exposition of measure theory and Lebesgue integration. But I remember struggling with several of my classmates while going through that class, trying to make sense of what was “really going on”. If math were just axiomatic, there wouldn’t be anything left to ask once we had recognized that the proofs really do prove the theorems in question. But there’s still a sense of there being something left to understand, and it certainly seems to go beyond matters of classification.
What finally made it all “click” for me was Henri Lebesgue’s own description of his integral. I can’t seem to find the original quote, but in short he offered an analogy of a shopkeeper counting the day’s revenue. One way, akin to the Riemann integral, is to count the money in the order in which it was received and add it up as you go. The second, akin to Lebesgue integration, is to sort the money by value ($1 bills, $5 bills, etc.) and then count how many are in each pile (i.e. the measure of the piles). Suddenly everything we were doing made tremendously more sense to me; for instance, I could see how the proofs were conceived, even though my insight didn’t actually change anything about how I perceived the axiomatic logic of the proofs.
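Here is a toy rendering of that analogy (my own illustration, not Lebesgue’s wording; the till contents are made up):

```python
from collections import Counter

till = [1, 5, 1, 20, 5, 1, 10, 5, 1]    # bills in the order they arrived

# Riemann-style: add the money up in the order it was received.
in_order_total = sum(till)

# Lebesgue-style: sort into piles by denomination, then weight each value
# by the size ("measure") of its pile.
piles = Counter(till)                    # {1: 4, 5: 3, 20: 1, 10: 1}
by_value_total = sum(value * count for value, count in piles.items())

print(in_order_total, by_value_total)    # 49 49
```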
The fact that some people saw this without Lebesgue’s analogy is beside the point. The point is that there’s an extra something that seems to need to be added in order to feel like the material is understood.
I’m going to some lengths to point this out because the idea of math as perfect and axiomatic just isn’t the mathematics that humans practice or know. It can look that way, but the truth seems to be more complicated than that.
I think perhaps a better example would be the difference between partitive and quotative division.
Maybe an even easier example is the commutativity of multiplication itself. It is not a priori clear that 5 groups of 3 objects each are the same as 3 groups of 5 objects each. When I was a child I was confused about why addition and multiplication are commutative while exponentiation isn’t.
I’m going to some lengths to point this out because the idea of math as perfect and axiomatic just isn’t the mathematics that humans practice or know. It can look that way, but the truth seems to be more complicated than that.
Yes, we have powerful (sometimes astoundingly powerful, as in the case of Ramanujan) intuitions built into our brains that allow us to do high-level operations. Mathematics is practically never done at the lowest level of formal manipulation. There is certainly a large difference between mathematics as an axiomatic system and the art of mathematics as a human endeavour; if there weren’t, mathematicians would have been replaced by machines long ago. But that doesn’t seem very relevant to the question of the truth of mathematical theorems. Whatever intuitive thought led to its discovery, people will agree that a theorem is valid iff there is a formal proof.
Maybe an even easier example is the commutativity of multiplication itself.
That’s a good point! I avoided that example because there’s a pretty easy and convincing “proof” of the commutativity of multiplication, namely that turning a rectangle on its side doesn’t change how many things constitute it. So it doesn’t matter whether you count how many are in each row and then count how many rows there are, or whether you do that with columns instead.
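A minimal sketch of that argument in code (my own, just to make the counting explicit): counting by rows and counting by columns walk over exactly the same set of cells, so the two products must agree.

```python
rows, cols = 3, 5
cells = [(r, c) for r in range(rows) for c in range(cols)]   # the 3-by-5 rectangle

# Count row by row, then column by column; both sweeps touch every cell once.
by_rows = sum(sum(1 for cell in cells if cell[0] == r) for r in range(rows))
by_cols = sum(sum(1 for cell in cells if cell[1] == c) for c in range(cols))

print(by_rows, by_cols, 3 * 5, 5 * 3)    # 15 15 15 15
```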
I think it’s terribly sad that they don’t encourage children to notice that or something like it. But there are a lot of things about education I find terribly sad and that I’m doing my damnedest to fix.
But that doesn’t seem very relevant to the question of the truth of mathematical theorems. Whatever intuitive thought led to its discovery, people will agree that a theorem is valid iff there is a formal proof.
Agreed, though there’s no objective definition of what constitutes a “formal proof”. Despite what it might seem like from the outside, there’s no single axiomatic system and set of deductive rules to which all subfields of mathematics pay homage.
In the actual physical world we live in, statements like “there is a square of area two” might not be exactly true. There certainly is no evidence that they are exactly true. (Whereas two plus two really does give you exactly four bananas.)
It’s certainly true that much human-developed math was developed to serve practical purposes, and therefore does accurately model aspects of the real world. But the math isn’t made less true because the physical world deviates slightly from it; likewise the math that’s less tied to the physical world isn’t less true. There are lots and lots of theorems that are interesting, and even useful, but that don’t seem to have much to do with anything physical. (E.g., number theory, or abstract algebra.)
We should maybe taboo the word “true”, since for a mathematical theorem to be true is not exactly the same as for an interpreted sentence about the physical world. How would you then formulate the sentence “the math that’s less tied to the physical world isn’t less true”?
In this case, I mean something like “if you start off with consistent and true beliefs, adding more true beliefs won’t lead to self-contradiction.” I can define self-contradiction formally, as asserting both a statement and its formal negation.
This may seem slightly circular, but I think it’s still a useful definition that captures what I want. I also think some circularity is useful to capture what we mean by an axiomatic system.