A thought (and I might just be crazy here): if we think of mathematics as a special case of analogical reasoning, à la Hofstadter or Gentner, then it seems we could think of mathematics as layered analogies.
More concretely: geometry, arithmetic, and algebra have obvious physical analogues and seem to have been derived by generalizing certain kinds of action protocols. Basic algebra lets one generalize about which transactions are beneficial; geometry lets one generalize about the relative sizes of things and, well, about more complicated matters like architecture.
Mathematics can be thought of as a sort of protocol logic. We use protocols to reason about protocols, and so we can devise a protocol logic for types of protocol logics. This seems to be what many of the more abstract areas of mathematics really are: they reason analogically from one domain of mathematics, borrowing its tricks, and apply them to thinking about other parts of mathematics. In this way mathematics acts as its own subject matter and builds on itself recursively.
Take mathematical logic (from a historical perspective), for example. Mathematical logicians look at what mathematicians actually do: they take the black box of “doing math” and devise a rule set that captures it; they search for a representative protocol. N logicians could devise N hypotheses and see where each hypothesis diverges from the black box (‘inconsistent!’ one may shout; ‘underpowered, cannot prove this known result!’ another might say). As in any other endeavor, we cannot be sure we have hit on the correct hypothesis, and indeed new set theories and logics are still being toyed with today.
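To make the black-box picture concrete, here is a toy sketch (every name and “theorem” string here is made up for illustration): each candidate rule set is scored against a stock of known results, rejected as inconsistent if it proves a known falsehood, and flagged as underpowered if it fails to prove a known theorem.

```python
# Toy illustration of N hypotheses checked against the black box of
# "doing math". All names and theorem strings are hypothetical.

KNOWN_THEOREMS = {"0 != 1", "2 + 2 = 4"}
KNOWN_FALSEHOODS = {"0 = 1"}

def evaluate(hypothesis):
    """Compare one candidate rule set against the known results."""
    proved = hypothesis["proves"]
    if proved & KNOWN_FALSEHOODS:
        return "inconsistent!"
    if not KNOWN_THEOREMS <= proved:
        return "underpowered, cannot prove a known result!"
    return "consistent with the black box (so far)"

hypotheses = [
    {"name": "H1", "proves": {"0 != 1", "2 + 2 = 4", "0 = 1"}},  # proves too much
    {"name": "H2", "proves": {"0 != 1"}},                        # proves too little
    {"name": "H3", "proves": {"0 != 1", "2 + 2 = 4"}},           # survives, for now
]

for h in hypotheses:
    print(h["name"], "->", evaluate(h))
```

Of course, a surviving hypothesis is only “correct so far” — exactly the point above about new set theories still being devised.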
Just take Ross Brady’s work on universal logic. He devised an alternative logic in which to build a set theory that allowed for an unrestricted axiom of comprehension, nearly one hundred years after Russell’s paradox.
It seems to me that ultimately a mathematical logician should desire a mechanical understanding of mathematics; the task of building a machine that can create new mathematics (as opposed to simply searching the space of known mathematics, or, simpler still, the space of known analytic functions) requires this understanding.
I expect such a machine to take its input data and arrange expected changes into some sort of logical protocols so that it can compute counterfactuals. I expect that recurrent protocols of this sort would be cached and consolidated by some process, which seems very hard to actually define algorithmically.
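A toy sketch of that caching idea (purely illustrative — every class, method, and example string here is made up): recurring observed transitions get consolidated into a cache of “protocols”, which can then be replayed to answer counterfactual queries.

```python
# Illustrative sketch only: consolidate recurrent transitions into cached
# protocols, then replay them to compute counterfactuals.
from collections import Counter

class ProtocolCache:
    def __init__(self, threshold=2):
        self.counts = Counter()   # how often each transition has recurred
        self.cached = {}          # consolidated protocols: state -> next state
        self.threshold = threshold

    def observe(self, state, next_state):
        """Record one observed transition; consolidate it once it recurs."""
        self.counts[(state, next_state)] += 1
        if self.counts[(state, next_state)] >= self.threshold:
            self.cached[state] = next_state

    def counterfactual(self, state):
        """What do the cached protocols predict would follow this state?"""
        return self.cached.get(state)  # None if nothing is consolidated yet

cache = ProtocolCache()
cache.observe("drop cup", "cup breaks")
cache.observe("drop cup", "cup breaks")   # recurrence triggers consolidation
print(cache.counterfactual("drop cup"))   # -> cup breaks
```

The hard part the comment above gestures at is not the caching itself but deciding what counts as “the same protocol” across superficially different situations — which is the analogical-reasoning step, and nothing in this sketch addresses it.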
This actually makes quite a bit of sense (to me, of course) in terms of outcomes: it would explain why mathematics is so widely applicable; it is all about analogical reasoning, and about reasoning over certain types of protocols.
So, am I crazy? Did that spiel make any damned sense?
Just take Ross Brady’s work on universal logic. He devised an alternative logic in which to build a set theory that allowed for an unrestricted axiom of comprehension, nearly one hundred years after Russell’s paradox.
I don’t know the book, but here’s a review. Unrestricted comprehension, at the expense of restricted logic, which is an inevitable tradeoff ever since Russell torpedoed Frege’s system. It’s like one of those sliding-block puzzles. However you slide the blocks around, there’s always a hole, and I don’t see much philosophical significance in where the hole gets shifted to.
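For concreteness, the sliding-block tradeoff shows up in the derivation itself: unrestricted comprehension plus full classical logic yields Russell’s paradox, so one of the two has to give.

```latex
% Unrestricted comprehension: for any formula phi there is a set of
% exactly the x satisfying phi(x).
\exists R\,\forall x\,\bigl(x \in R \leftrightarrow \varphi(x)\bigr)
% Instantiating phi(x) as x \notin x gives Russell's set:
R = \{\, x \mid x \notin x \,\}
% Asking whether R belongs to itself then yields, in classical logic,
R \in R \;\leftrightarrow\; R \notin R
% a contradiction -- so either comprehension must be restricted
% (as in ZF) or the background logic weakened (as in Brady's approach).
```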
Yes, I’ve read that review and you’re correct; probably a bad example. Anyway, my general point was that mathematics is built from concrete subject matter, and mathematics itself, being a neurological phenomenon, is as concrete a subject matter as any other. We take examples from our daily comings and goings and examine the logic (in the colloquial sense) of them to devise mathematics. The activity of doing mathematics is itself one part of those comings and goings, and this seems to me to be the source of many of the seemingly intractable abstractions that make ideas like Platonism so appealing. Does that seem correct to you?
You would find Lakoff and Núñez’s Where Mathematics Comes From interesting. Their thesis is along these lines. I read the first chapter and got a lot out of it.