I am not familiar with Bachelard, but I suspect that we understand how humans make progress in understanding the world rather better by now.
But first, the very idea that formalization would have helped discover non-Euclidean geometries earlier seems to run counter to the empirical observation that Euclid himself formalized geometry with 5 postulates; how much more formal can it get? Compared to the rest of the science of the time, it was a huge advance. He also saw that the 5th one did not fit neatly with the rest. Moreover, non-Euclidean geometry was right there in front of him the whole time: spheres are all around. And yet the leap from the straight line to the great circle, and the realization that his 4 postulates work just fine without the 5th, had to wait some two millennia.
In general, what you (he?) call “suspension of intuition” seems to me to be more like the emergence of a different intuition after a lot of trying and failing. I think that the recently empirically discovered phenomenon of “grokking” in ML provides a better model of how breakthroughs in understanding happen. It is more of a Hegelian/Kuhnian model of phase transitions after a lot of data accumulation and processing.
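For the curious, here is a minimal sketch of the kind of setup in which grokking was reported (a small network learning modular addition under strong weight decay). The hyperparameters are illustrative guesses on my part, and whether the delayed jump in test accuracy actually appears depends on the settings:

```python
# Grokking-style setup sketch: learn (a + b) mod P from half of all pairs.
# Train accuracy saturates quickly; in reported experiments, test accuracy
# jumps to ~100% only much later ("grokking").
import torch
import torch.nn as nn

P = 97  # modulus
torch.manual_seed(0)
pairs = [(a, b) for a in range(P) for b in range(P)]
perm = torch.randperm(len(pairs))
split = len(pairs) // 2

def make(idx):
    xs = [pairs[i] for i in idx.tolist()]
    return torch.tensor(xs), torch.tensor([(a + b) % P for a, b in xs])

x_tr, y_tr = make(perm[:split])
x_te, y_te = make(perm[split:])

model = nn.Sequential(
    nn.Embedding(P, 64),   # embed each of the two operands
    nn.Flatten(),          # (batch, 2, 64) -> (batch, 128)
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, P),     # logits over residues mod P
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20001):
    opt.zero_grad()
    loss_fn(model(x_tr), y_tr).backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            tr = (model(x_tr).argmax(1) == y_tr).float().mean()
            te = (model(x_te).argmax(1) == y_te).float().mean()
        print(f"step {step}: train acc {tr:.2f}, test acc {te:.2f}")
```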
It is a fascinating topic why these transitions happen quickly in some cases and take forever in others, and how they can be intentionally sped up. This is not necessarily grouped by field, either. Progress in fundamental physics was at breakneck speed in the first 30 years of the 20th century, then slowed down, and is at a crawl now, though glimpses of things to come are there again. Biology was stagnant for a long time after Galen, then there was an age of breakthroughs. I suspect this applies to many other fields as well. Psychology had a leap with Freud, and we are still not very far from his models.
In retrospect, it does not feel like “formalization explicitly refuses to acknowledge our intuitions of things, the rich experience we always integrate into our concepts” is an essential step. It is more like “after a new concept emerges, it feels like it emerged after someone replaced intuition with formalization”, but that was not the actual process that led to the emergence.
It is more of a Hegelian/Kuhnian model of phase transitions after a lot of data accumulation and processing.
But in the case of hyperbolic geometry, the accumulation of “data” came from working out the consequences of varying the axioms, right? So I don’t think this necessarily contradicts the OP. We have a set of intuitions, which we can formalize and distill into axioms. Then, by varying the axioms and systematically working out their consequences, we can develop new intuitions.
Well, hyperbolic geometry was counterintuitive (the relevant 2d manifold does not embed isometrically into 3d Euclidean space), but spherical geometry was not. Euclid had all the knowledge needed to construct spherical geometry. In fact, most of it was constructed experimentally, driven by the need to describe the celestial sphere and to navigate the seas; it just wasn’t connected to the 4 postulates. Once this connection is made and the 5th postulate is shown, at least informally, to be independent of the first 4, holding only as the limit of spherical geometry on an infinitely large sphere, the step toward hyperbolic geometry is quite natural.
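For concreteness, one way to see that limit (my gloss, not a historical reconstruction): the spherical law of cosines on a sphere of radius $R$,

$$\cos\frac{c}{R} = \cos\frac{a}{R}\cos\frac{b}{R} + \sin\frac{a}{R}\sin\frac{b}{R}\cos\gamma,$$

reduces, after expanding to second order in $1/R$ (using $\cos x \approx 1 - x^2/2$ and $\sin x \approx x$) and cancelling, to the Euclidean law of cosines

$$c^2 \approx a^2 + b^2 - 2ab\cos\gamma,$$

and by Girard’s theorem the angle sum of a spherical triangle is $\pi + \mathrm{Area}/R^2$, which tends to the Euclidean $\pi$ as $R \to \infty$.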
the accumulation of “data” came from working out the consequences of varying the axioms
I don’t know if that is how it worked out in this case. It does not seem to be how our understanding advances in general. In the natural sciences it tends to be driven by experiment, which is enabled by a mix of knowledge and technology. The revolutionary Hodgkin–Huxley model of action potential propagation depended on advances in technology/electrophysiology (20 μm silver electrodes, the availability of giant squid axons), in mathematics and EE (circuit analysis), and on something else that allowed them to ask the right questions. It is not clear to me that it was a break from intuition to axioms in this case; it seems unlikely.
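For reference, the core of their model is exactly that circuit analysis: the membrane as a capacitor in parallel with voltage-dependent conductances. A minimal forward-Euler sketch with the textbook squid-axon constants (illustrative only, not a validated simulation):

```python
# Hodgkin-Huxley membrane equation via Kirchhoff's current law:
# C dV/dt = I_ext - I_Na - I_K - I_leak, with voltage-dependent
# gating variables n, m, h. Classic squid-axon constants.
import numpy as np

g_na, g_k, g_l = 120.0, 36.0, 0.3     # max conductances (mS/cm^2)
e_na, e_k, e_l = 50.0, -77.0, -54.4   # reversal potentials (mV)
c_m = 1.0                             # membrane capacitance (uF/cm^2)

def rates(v):
    """Standard HH opening/closing rates (1/ms) at membrane voltage v (mV)."""
    an = 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
    bn = 0.125 * np.exp(-(v + 65) / 80)
    am = 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
    bm = 4.0 * np.exp(-(v + 65) / 18)
    ah = 0.07 * np.exp(-(v + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(v + 35) / 10))
    return an, bn, am, bm, ah, bh

dt, t_end = 0.01, 50.0                 # ms
v, n, m, h = -65.0, 0.32, 0.05, 0.6    # resting state
for step in range(int(t_end / dt)):
    t = step * dt
    i_ext = 10.0 if 5.0 <= t <= 30.0 else 0.0  # current pulse (uA/cm^2)
    an, bn, am, bm, ah, bh = rates(v)
    n += dt * (an * (1 - n) - bn * n)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    i_ion = (g_na * m**3 * h * (v - e_na)
             + g_k * n**4 * (v - e_k)
             + g_l * (v - e_l))
    v += dt * (i_ext - i_ion) / c_m    # forward Euler step
```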
Yeah, I think the “varying the axioms” thing makes more sense for math in particular, not so much the other sciences. As you say, the equivalent thing in the natural sciences is more like experimentation.
Maybe we can roughly unify them? In both cases, we have some domain where we understand phenomena well. Using this understanding, we develop tools that allow us to probe a new domain which we understand less well. After repeated probing, we develop an intuition/understanding of this new domain as well, allowing us to develop tools to explore further domains.
There is definitely the step of

develop tools that allow us to probe a new domain which we understand less well. After repeated probing, we develop an intuition/understanding of this new domain

but I don’t see how it can be unified with the OP’s thesis.
I’m saying the study of novel mathematical structures is analogous to such probing. At first, one can only laboriously perform step-by-step deductions from the axioms, but as one does many such deductions, intuition and understanding can be developed. This is enabled by formalization.
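To make “step-by-step deductions from the axioms” concrete, here is a toy illustration of mine (Lean 4 syntax assumed; the names are made up): axiomatize a bare-bones structure, then derive a consequence using nothing but the axioms.

```lean
-- A toy axiom system: a type with a binary operation and a two-sided identity.
class Monoidish (α : Type) where
  op   : α → α → α
  e    : α
  op_e : ∀ a : α, op a e = a  -- e is a right identity
  e_op : ∀ a : α, op e a = a  -- e is a left identity

-- Deduced purely from the axioms: any other right identity equals e.
theorem right_id_unique {α : Type} [M : Monoidish α]
    (e' : α) (h : ∀ a : α, M.op a e' = a) : e' = M.e := by
  have h1 : M.op M.e e' = M.e := h M.e       -- instantiate h at e
  have h2 : M.op M.e e' = e' := M.e_op e'    -- instantiate the e_op axiom at e'
  rw [h1] at h2                              -- h2 becomes: M.e = e'
  exact h2.symm
```

Each line is mechanical, but after enough of these one starts to “see” facts about the structure directly.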
That certainly makes sense. For example, there are quite a few abstraction steps between the Fundamental theorem of calculus and certain commutative diagrams.
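To name one of those steps explicitly (my gloss): the Fundamental theorem of calculus is the one-dimensional case of the generalized Stokes theorem,

$$\int_a^b f'(x)\,dx = f(b) - f(a) \quad\longrightarrow\quad \int_M d\omega = \int_{\partial M} \omega,$$

with $f$ a 0-form and $[a,b]$ the manifold whose boundary is $\{a,b\}$.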
Thanks for your thoughtful comment!

First, I want to clarify that this is obviously not the only function of formalization. I feel like this might clarify a lot of the points you raise.
But first, the very idea that formalization would have helped discover non-Euclidean geometries earlier seems to run counter to the empirical observation that Euclid himself formalized geometry with 5 postulates; how much more formal can it get? Compared to the rest of the science of the time, it was a huge advance. He also saw that the 5th one did not fit neatly with the rest. Moreover, non-Euclidean geometry was right there in front of him the whole time: spheres are all around. And yet the leap from the straight line to the great circle, and the realization that his 4 postulates work just fine without the 5th, had to wait some two millennia.
So Euclid formalized our geometric intuitions, the obvious and immediate shapes that naturally make sense of the universe. This use of formalization was to make more concrete and precise some concepts that we already had but that were “floating around”. He did it so well that these concepts and intuitions acquired an even stronger “reality” and “obviousness”: how could you question geometry when Euclid had made so tangible the first intuitions that came to your mind?
According to Bachelard, the further formalization of geometry, or rather its axiomatization, which stripped down the apparently simple concepts of points and lines to make them algebraically manipulable, was a key part of getting out of this conceptual constraint.
That being said, I’d be interested in an alternative take or evidence that this claim is wrong. ;)
In general, what you (he?) call “suspension of intuition” seems to me to be more like the emergence of a different intuition after a lot of trying and failing. I think that the recently empirically discovered phenomenon of “grokking” in ML provides a better model of how breakthroughs in understanding happen. It is more of a Hegelian/Kuhnian model of phase transitions after a lot of data accumulation and processing.
This strikes me as a false comparison/dichotomy: why can’t both be part of scientific progress? Especially in physics and chemistry (the two fields Bachelard knew best), there are many examples of productive formalization/axiomatization as suspension of intuition:
Boltzmann’s work, which generally started from mathematical building blocks, built structures from them, and only then interpreted them. See this book for more details on this view.
Quantum Mechanics went through that phase, where the half-baked models based on classical mechanics didn’t work well enough, and so there was an effort at formalization and axiomatization that revealed the underlying structure without as much pollution by macroscopic intuition.
The potential function came from a purely mathematical and formal effort to compress the results of classical mechanics, and ended up being incorporated into the core concepts of physics (a small worked example below).
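To illustrate that compression with a standard example (mine, not Bachelard’s): the single scalar function $V$ packages an entire force field,

$$\mathbf{F} = -\nabla V, \qquad V(r) = -\frac{GMm}{r} \;\Rightarrow\; \mathbf{F} = -\frac{GMm}{r^2}\,\hat{\mathbf{r}},$$

and conservation of energy, $\tfrac{1}{2}mv^2 + V = \mathrm{const}$, falls out of the same formal object.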
I’ve also found that, on inspection, models of science based on the gathering of a lot of data rarely fit the actual history. Notably, Kuhn’s model contradicts the history of science almost everywhere, and he gives a highly biased reading of the key historical events that he leverages.