I don’t completely agree with your characterisation “[math] seems to have gotten it impossibly right the first time around” of how we got the current abstractions in mathematics. Taking your example of analysis: 1) Leibniz and Newton put forward different ideas about what the operation of taking a derivative meant, with different notations; 2) there was a debate over (two) centuries before the current abstractions were settled on (the ones that are taught in undergraduate calculus); 3) in the 60s, famously, “non-standard analysis” was developed, to give an example of a radical departure, but it hasn’t really caught on.
Still within analysis, I would point out that it’s common(-ish?) to teach two theories of integration in undergraduate math: Riemann and Lebesgue. Riemann integration is the more intuitive “area of thin rectangles under the curve” and is taught first. However, the Lebesgue integral has better theoretical properties, which are useful in, for example, differential equations. And beyond the undergraduate level, there are also conceptions of limits in topology and category theory.
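For concreteness (my rough sketch, not the parent’s wording): the Riemann integral approximates area with thin rectangles over a partition of the domain, while the Lebesgue integral (here in its “layer cake” form for non-negative f) measures level sets of the function instead:
$$
\int_a^b f(x)\,dx \;\approx\; \sum_i f(x_i^{*})\,\Delta x_i
\qquad\text{vs.}\qquad
\int f\,d\mu \;=\; \int_0^{\infty} \mu\bigl(\{x : f(x) > t\}\bigr)\,dt \quad (f \ge 0).
$$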
Overall, I’d agree that the rate of trying out new abstractions seems to be lower in mathematics than in programming, but, as another commenter pointed out, mathematics is also much older.
A second point is that the relevant distinction may be teaching mathematics vs research mathematics. It seems to me that a lot more theories are tried out in topics of active research than in teaching the unwashed hordes of non-math-major students.
I mean, I basically agree with this criticism.
However, my problem isn’t that new theories don’t exist in the literal sense; my issue is that the old theories are so calcified that one can’t really do without knowing them.
E.g. if a programmer says “Fuck this C nonsense, it’s useless in the modern world, maybe some hermits in an Intel lab need to know it, but I can do just fine by using PHP”, they can still become Mark Zuckerberg. I don’t mean that in the “become rich as ***” sense but in the “become the technical lead of a team developing one of the most complex software products in the world” sense.
Or, if someone doesn’t say “fuck C” but says “C seems too complex, I’m going to start with something else”, then they can do that, and after 5 years of coding in high-level languages they’ll have acquired a set of skills that lets them dig back down and learn C very quickly.
And you can replace C with any “old” abstraction that people still consider to be useful, and PHP with any new abstraction that makes things easier but is arguably more limited in various key areas. (Also, I wouldn’t even claim PHP is easier than C; PHP is a horrible mess and C is beautiful by comparison, but I think the general consensus is against me here, so I’m giving it as an example.)
In mathematics this does not seem to be an option; there’s no 2nd-year psychology major who decided to take a very simple mathematical abstraction to its limits and became the technical leader of one of the most elite teams of mathematicians in the world. Even the mere idea of that happening seems silly.
I don’t know why that is. Maybe it’s because, again, math is just harder and there’s no 3-month crash course that will basically give you mastery of a huge area of mathematics the same way a 3-month crash course in PHP will give you the tools needed to build proto-Facebook (or any other piece of software that defines a communication and information interpretation & rendering protocol between multiple computers).
Mathematics doesn’t have useful abstractions that allow the user to be blind to the lower-level abstractions. Nonstandard analysis exists, but good luck trying to learn it if you don’t know a more kosher version of analysis already; you can’t start at nonstandard analysis… or maybe you can? But then that means this is a very under-exploited idea, and it gets back to the point I was making.
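To make “start at nonstandard analysis” a bit more concrete (my sketch, not anything the parent claimed): in Robinson’s framework the derivative is the standard part of a difference quotient taken with an actual infinitesimal, rather than an epsilon-delta limit:
$$
f'(x) \;=\; \operatorname{st}\!\left(\frac{f(x+\varepsilon)-f(x)}{\varepsilon}\right)
\;\text{ for any nonzero infinitesimal } \varepsilon,
\qquad\text{vs.}\qquad
f'(x) \;=\; \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}.
$$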
I’m using programming as the bar here since it seems that, from the 40s onward, the requirements to be a good programmer have been severely lowered by the new abstractions we introduce. In the 40s you had to be a genius to even understand the idea of a computer. In modern times you can be a kinda smart but otherwise unimpressive person and create revolutionary software or write an amazing language or library. Somehow, even though the field got more complex, the entry cost went from 20+ years including the study of mathematics, electrical engineering and formal logic to a 3-month bootcamp or, like… reading 3 books online. In mathematics it seems that the entry cost gets higher as time progresses, and any attempts to lower it are just tiny corrections or simplifications of existing theory.
And lastly, I don’t know if there’s a process “harming” math’s complexity that could easily be stopped, but there are obvious processes harming programming’s complexity that seem, at least in principle, stoppable. E.g. look at things like coroutines vs threads vs processes, which get thought of as separate abstractions, yet are basically the same **** thing on all but a few kernels that have some niche ideas about asyncio and memory sharing.
That is to say, I can see a language that says “Screw coroutines vs threads vs processes nonsense, we’ll try to auto-detect the best abstraction that the kernel+CPU combination you have supports for this, maybe with some input from the user, and go from there” (I think, at least in part, Go has tried this, but in a very bad fashion, and at least in principle you could write a JVM + JVM language that does this, but the current JVM languages and implementations wouldn’t allow for this).
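Whatever one makes of Go’s execution, its goroutines are probably the closest existing sketch of that idea: the programmer only ever writes one primitive and the runtime decides how to spread the work over OS threads and cores. A minimal illustrative sketch (mine, not a design for the unified language above):
```go
// One concurrency primitive (`go`); the runtime multiplexes these goroutines
// onto however many OS threads/CPUs it decides to use (tunable via GOMAXPROCS).
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// Query (without changing) how many CPUs the runtime will schedule onto.
	fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0))

	var wg sync.WaitGroup
	results := make([]int, 100)
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = i * i // each goroutine writes only its own slot: no data race
		}(i)
	}
	wg.Wait()
	fmt.Println(results[:5])
}
```
The point is only that the coroutine-vs-thread decision never shows up in the calling code; a version of this done well could, in principle, hide the process boundary the same way.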
But if that language never comes, and every single programmer learns to think in terms of those 3 different parallelism abstractions and their offshoots, then we’ve just added some arguably pointless complexity that makes sense for our day and age but could well become pointless in a better-designed future.
And at some point you’re bound to get stuck with things like that and the entry cost goes up, though hopefully other abstractions are simplified to lower it and the equilibrium stays at a pretty low number of hours.
(Also, I wouldn’t even claim PHP is easier than C, PHP is a horrible mess and C is beautiful by comparison, but I think the general consensus is against me here, so I’m giving it as an example).
The consensus isn’t against you here. PHP consistently ranks as one of the most hated programming languages in general use. I’ve seen multiple surveys.
You make this comparison between programmers and mathematicians, but perhaps the more apt analogy is programming language designers vs mathematicians, and programmers vs engineers/scientists? I would say that most engineers and scientists learn a couple of mathematical models in class and then go off and do stuff in R or Matlab. What the average engineer/scientist can model now is far greater than even what the very best could model in the past. And they don’t need to know which of the 11 methods of approximation is being used under the hood of the program.
Then the different abstractions are things like ODE models, finite element analysis, dynamical systems (e.g. stability), Monte Carlo, eigenvalue analysis, graph theory stuff, statistical significance tests, etc.