I disagree with this particular line, though I don’t think it messes up your general point here (if anything it strengthens it):
> The story goes like this: in the beginning, Leibniz and Newton developed calculus using infinitesimals, which were intuitive but had no rigorous foundation (which is to say, ad-hoc).
Part of the point of the post is that ad-hoc-ness is not actually about the presence or absence of rigorous mathematical foundations; it’s about how well the mathematical formulas we’re using match our intuitive concepts. It’s the correspondence to intuitive concepts which tells us how much we should expect the math to generalize to the new cases our intuition says the concept should cover. The “arguments” which we want to uniquely specify our formulas are not derivations or proofs from ZFC; they’re intuitive justifications for why we’re choosing these particular definitions.
So I’d actually say that infinitesimals were less ad-hoc, at least at first, than epsilon-delta calculus.
This also highlights an interesting point: ad-hoc-ness and rigorous proofs are orthogonal. It’s possible to have the right formulas for our intuitive concepts before we know exactly what rules and proofs will make it fully rigorous.
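To make the orthogonality concrete (a standard illustration, not from the original post): compare the infinitesimal-style definition of the derivative, which Robinson’s nonstandard analysis later made rigorous via the standard-part function, with the epsilon-delta definition:

```latex
% Infinitesimal-style definition: dx is a nonzero infinitesimal,
% st() takes the standard part (rigorous in nonstandard analysis):
\[
  f'(x) \;=\; \operatorname{st}\!\left(
    \frac{f(x + \mathrm{d}x) - f(x)}{\mathrm{d}x}
  \right)
\]
% Epsilon-delta definition (Weierstrass):
\[
  f'(x) = L
  \iff
  \forall \varepsilon > 0 \;\exists \delta > 0 :\;
  0 < |h| < \delta \implies
  \left| \frac{f(x+h) - f(x)}{h} - L \right| < \varepsilon
\]
```

The first form matches the intuitive “ratio of tiny changes” concept directly but lacked a rigorous foundation for two centuries; the second is rigorous but further from the intuition. That’s the sense in which intuition match and rigor can come apart.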
Highlighting the difference between ad-hoc-ness and rigor was what I was trying to do when I emphasized that element, though I should have placed the parenthetical between the intuition section and the rigor section. The implicit assumption I made, which I should probably make explicit, is that if we have something which matches our intuitive concepts well and has a rigorous foundation, then I expect it to dominate other options (both in terms of effectiveness and popularity).
Fleshing out the assumption a bit: picture a 2x2 graph with intuitive match (i.e., non-ad-hoc-ness) on the x axis and rigor on the y axis. The upper right quadrant (good match, rigorous) is the good stuff we use all the time; the upper left quadrant (poor match, rigorous) is true-but-useless; the bottom left quadrant is ignored completely; and the bottom right quadrant, good intuitive match but low rigor, is where all the action is (in the sense of definitions that might be really useful and adopted in the future).
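Sketching that layout as a table (my own rendering, reading the x axis as intuitive match so that moving right means less ad-hoc):

```
              poor intuition match    good intuition match
high rigor    true-but-useless        the good stuff
low rigor     ignored completely      where the action is
```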
The infinitesimals vs. limits case seems like an example: good intuition match and poor rigor was replaced by acceptable intuition match and good rigor. However, the example is a bit messy: I’m zeroing in on infinitesimals vs. limits as methods rather than as definitions per se, or on something like the presentation of the fundamental theorem of calculus.
I quite separately took the liberty of assuming the same logic you are applying to definitions could be applied to the rest of mathematical architecture: methods, algorithms, notation, and so on. I admit this introduces quite a bit of fuzziness.