Thanks for your thoughtful comment!

First, I want to clarify that this is obviously not the only function of formalization. I feel like this might address a lot of the points you raise.
> But first, the very idea that formalization would have helped discover non-Euclidean geometries earlier seems counter to the empirical observation that Euclid himself formalized geometry with 5 postulates; how much more formal can it get? Compared to the rest of the science of the time, it was a huge advance. He also saw that the 5th one did not fit neatly with the rest. Moreover, non-Euclidean geometry was right there in front of him the whole time: spheres are all around. And yet the leap from the straight line to the great circle, and the realization that his other 4 postulates work just fine without the 5th, had to wait some two millennia.
So Euclid formalized our geometric intuitions, the obvious and immediate shapes that naturally make sense of the universe. This use of formalization made more concrete and precise some concepts that we already had but that were “floating around”. He did it so well that these concepts and intuitions acquired an even stronger “reality” and “obviousness”: how could you question geometry when Euclid had made so tangible the first intuitions that came to your mind?
According to Bachelard, the further formalization, or rather axiomatization, of geometry, which stripped down the apparently simple concepts of points and lines to make them algebraically manipulable, was a key part of getting out of this conceptual constraint.
That being said, I’d be interested in an alternative take, or in evidence that this claim is wrong. ;)
> In general, what you (he?) call “suspension of intuition” seems to me to be more like the emergence of a different intuition after a lot of trying and failing. I think that the recently empirically discovered phenomenon of “grokking” in ML provides a better model of how breakthroughs in understanding happen. It is more of a Hegelian/Kuhnian model of phase transitions after a lot of data accumulation and processing.
This strikes me as a false comparison/dichotomy: why can’t both be part of scientific progress? Especially in physics and chemistry (the two fields Bachelard knew best), there are many examples of productive formalization/axiomatization as suspension of intuition:
- Boltzmann’s work generally started from mathematical building blocks, built structures from them, and only then interpreted them. See this book for more details on this view.
- Quantum Mechanics went through such a phase: the half-baked models based on classical mechanics didn’t work well enough, so there was an effort at formalization and axiomatization that revealed the underlying structure without as much pollution by macroscopic intuition.
- The potential function came from a purely mathematical and formal effort to compress the results of classical mechanics, and ended up incorporated among the core concepts of physics (a quick illustration of the kind of compression I mean is just below this list).
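To illustrate what I mean by compression (my own gloss, not a claim about which example Bachelard had in mind): for a conservative force field, the three component functions of the force collapse into a single scalar function,

$$\mathbf{F}(\mathbf{r}) = -\nabla V(\mathbf{r}),$$

so one function $V$ carries the same information as all three components of $\mathbf{F}$, and the work done along any path reduces to a difference $V(\mathbf{r}_1) - V(\mathbf{r}_2)$.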
I’ve also found that, on inspection, models of science based on the accumulation of a lot of data rarely fit the actual history. Notably, Kuhn’s model contradicts the history of science almost everywhere, and he gives a highly biased reading of the key historical events that he leverages.