Let me try another explanation.
The main point is: given a system, we don’t actually have that many degrees of freedom in what abstractions to use in order to reason about the system. That’s a core component of my research: the underlying structure of a system forces certain abstraction-choices; choosing other abstractions would force us to carry around lots of extra data.
However, if we have the opportunity to design a system, then we can choose what abstraction we want and then choose the system structure to match that abstraction. The number of degrees of freedom expands dramatically.
In programming, we get to design very large chunks of the system; in math and the sciences, less so. It’s not a hard dividing line—there are design problems in the sciences and there are problem constraints in programming—but it’s still a major difference.
In general, we should expect that looking for better abstractions is much more relevant to design problems, simply because the possibility space is so much larger. For problems where the system structure is given, the structure itself dictates the abstraction choice. People do still screw up and pick “wrong” abstractions for a given system, but since the space of choices is relatively small, it takes a lot less exploration to converge to pretty good choices over time.
Alright, I think what you’re saying makes more sense, and I think in principle I agree, provided you don’t claim there’s a clear division between, let’s call them, design problems and descriptive problems.
However, it seems to me that you are partially basing this hypothesis on science being more unified than it appears to me to be.
I.e. if the task of physicists were to design an abstraction that fully explained the world, then I would indeed understand how that’s different from designing an abstraction that is meant to work very well for a niche set of problems such as parsing ASTs or creating encryption algorithms (aka things for which there exist specialized languages and libraries).
However, it seems to me that, in practice, scientific theory is not at all unified, and the few parts of it that are unified tend to be the ones that are “wrong” on closer inspection and just serve as an entry point into the more “correct” and complex theories that can actually be used to solve relevant problems.
So if, e.g., there were one theory explaining interactions in the nucleus and it were consistent with the rest of physics, I would agree that maybe it’s hard to come up with another one. If there are 5 different theories, all of them designed to explain specific cases, with fuzzy boundaries where they break down, and they kinda make sense in the wider context if you squint a bit but not that much… then that feels much closer to the way programming tools are. To me it seems like physics is much closer to the second scenario, but I’m not a physicist, so I don’t know.
Even more so, it seems that scientific theory, much like programming abstraction, is often constrained by things such as speed. I.e. a theory can be “correct”, but if the computations are too complex to carry out (e.g. trying to simulate macromolecules using elementary-particle-based simulations), then the theory is not considered for a certain set of problems. This is very similar to, e.g., not using Haskell for a certain library (e.g. one meant to simulate elementary-particle-based physics and thus requiring very fast computations), even though in theory Haskell could produce simpler and easier-to-validate (read: with fewer bugs) code than Fortran or C.
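To make the “too complex to compute” point concrete, here is a rough back-of-envelope sketch. All numbers are illustrative assumptions, not taken from the discussion; it estimates what even a classical all-atom simulation of a macromolecule over a millisecond would cost, and a genuinely elementary-particle-level (quantum) treatment would scale far worse still:

```python
# Back-of-envelope cost estimate for all-atom simulation of a macromolecule.
# Every constant below is an assumed, order-of-magnitude illustration.

n_atoms = 1e5                    # assumed atom count (macromolecule + solvent)
neighbors_per_atom = 300         # assumed interactions per atom with a distance cutoff
flops_per_interaction = 50       # assumed cost of one pairwise force evaluation
timestep_s = 1e-15               # femtosecond timestep, typical for all-atom dynamics
target_time_s = 1e-3             # one millisecond of simulated time

steps = target_time_s / timestep_s                      # 1e12 timesteps
flops_per_step = n_atoms * neighbors_per_atom * flops_per_interaction
total_flops = steps * flops_per_step                    # ~1.5e21 operations

machine_flops = 1e15             # assumed sustained throughput of a petaflop machine
print(f"total work: ~{total_flops:.1e} floating-point operations")
print(f"wall time:  ~{total_flops / machine_flops / 86400:.0f} machine-days")
```

Even under these generous classical assumptions the answer is on the order of 10^21 operations, i.e. weeks of dedicated petaflop-scale compute for a single millisecond of one molecule, which is why coarser abstractions get used instead.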