Are there any particular examples of fields where “overmathematizing” demonstrably slowed down research?
How about String Theory in physics and the Gaussian Copula in finance?
Not demonstrable yet. If a correct theory of quantum gravity is found that turns out not to involve such sophisticated mathematics, then you’ll have a case for this. Right now all you have is the opinion of a few contrarians.
It should be string theorists’ job to defend string theory by actually producing something. A proposed theory is not innocent until proven guilty.
The point is, that’s not a case where we’ve actually found the way ahead, and shown that the mathematical speculation was a dead end. It’s therefore relatively weak evidence.
And given the number of times that further formalizing a field has paid off in the physical sciences, this doesn’t at all convince me that “overmathematizing” is a general problem.
In the case of quantum mechanics, the additional formalizing did certainly pay off. But the formalizing was done after the messy first version of the theory made several amazingly accurate predictions.
I thought that the simplifying assumptions, like the independence of mortgage defaults, were the root of the hidden black swan risk in the Gaussian copula. Anyway, the non-theoretical instinct trading of the early 80s led to the Savings and Loan crisis (see e.g. Liar’s Poker for an illustration of the typical attitudes), and I don’t think we have a “safer” candidate theory of financial derivatives to point to.
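To make that concrete, here is a minimal simulation sketch (all pool parameters are made up for illustration) of how much the correlation assumption drives the tail: a mortgage pool treated as if defaults were independent looks nearly immune to extreme losses, while the same pool under a one-factor Gaussian copula with a modest asset correlation has a fat tail.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_loans, p_default, rho = 1_000, 0.02, 0.3   # hypothetical pool: 1000 loans, 2% default rate, 0.3 asset correlation
n_sims = 200_000
c = norm.ppf(p_default)                      # latent default threshold

# Independence assumption: the number of defaults is Binomial and concentrates
# tightly around its mean of 20.
indep = rng.binomial(n_loans, p_default, size=n_sims)

# One-factor Gaussian copula: conditional on a shared factor M, loans default
# independently with probability p(M); integrating over M fattens the tail.
M = rng.standard_normal(n_sims)
p_given_M = norm.cdf((c - np.sqrt(rho) * M) / np.sqrt(1.0 - rho))
correlated = rng.binomial(n_loans, p_given_M)

extreme = 3 * n_loans * p_default            # "black swan" level: 3x the expected default count
print("P(extreme loss), independence assumed:", (indep >= extreme).mean())
print("P(extreme loss), correlated copula   :", (correlated >= extreme).mean())
```

The first probability is essentially zero; the second is on the order of several percent. The formalism isn't what hides the risk; the plugged-in assumption is.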
Sure, it’s not that the idea of the Gaussian Copula itself is wrong. It’s a mathematical theory; the only way it can be wrong is if there’s a flaw in the proof. The problem is that people were overeager to apply the GC as a description of reality. Why? Well, my belief is that it has to do with overmathematization.
Really, it’s odd that there is so much pushback against this idea. To me, it seems like a natural consequence of the Hansonian maxim that “research isn’t about progress”. People want to signal high status and affiliate with other high status folks. One way to do this is the use of gratuitous mathematics. It’s just like those annoying people who use big words to show off.
I still don’t see that you’ve demonstrated overmathematization as a hindering factor.
- Finance gurus used advanced math.
- Finance gurus made bad assumptions about the mortgage market.
How have you shown that one caused the other? What method (that you should have presented in your first post instead of dragging this out to at least four) would have led finance gurus to not make bad assumptions, and would have directed them toward less math?
I agree that it’s gotten to the point where academia adheres to standards that don’t actually maximize research progress, and too often tries to look impressive at the expense of doing something truly worthwhile. But what alternate epistemology do you propose that could predictably counteract this tendency? I’m still waiting to hear it.
(And the error in assumptions was made by practitioners, where the incentive to produce meaningful results is much stronger, because they actually get a chance to be proven wrong by nature.)
I think the compression principle provides a pretty stark criterion. If a mathematical result can be used to achieve an improved compression rate on a standard empirical dataset, it’s a worthy contribution to the relevant science. If it can’t, then it still might be a good result, but it should be sent to a math journal, not a science journal.
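For what it’s worth, here is a toy sketch of how that criterion could be operationalized as a two-part (MDL-style) codelength comparison; the dataset and the two candidate “theories” are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.binomial(1, 0.8, size=10_000)   # stand-in "standard empirical dataset": a biased bit sequence

def code_bits(data, p):
    """Bits an arithmetic coder would need if it models each bit as 1 with
    probability p (i.e. the negative log2-likelihood of the sequence)."""
    eps = 1e-12
    return -np.sum(data * np.log2(p + eps) + (1 - data) * np.log2(1 - p + eps))

# Theory A: "the bits are fair coin flips" -- nothing to fit, 1 bit per symbol.
bits_A = code_bits(data, 0.5)

# Theory B: fit the bias, then pay ~0.5*log2(n) bits to transmit the fitted
# parameter (the standard two-part MDL charge).
p_hat = data.mean()
bits_B = code_bits(data, p_hat) + 0.5 * np.log2(len(data))

print(f"Theory A codelength: {bits_A:.0f} bits")
print(f"Theory B codelength: {bits_B:.0f} bits  (shorter => the better theory, by this criterion)")
```

A mathematical result that never shortens anyone’s codelength in a comparison like this would, on this criterion, belong in a math journal rather than a science journal.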
I think the problem with overmathematization is that it adds prestige to theories while making them harder to check.
I’m guessing that people tend to think the opposite of overmathematization is hand-waving. Perhaps you could talk about inappropriate mathematization instead. An example would be the majority of artificial neural networks: interesting maths and systems, to be sure, but a million miles away from actual neurons.
How about examples from physics and chemistry in 1964? Or do you think Platt was wrong? What’s different this time?
This probably doesn’t quite count, but how about Einstein’s and Szilard’s relatively low esteem among their peers, and consequently low influence, largely due to relatively low math ability prior to their major successes (and, in Szilard’s case, even after his)?
If Szilard had been more influential, nuclear weapons could have been developed early enough to radically change the course of WWII.