There is a major difference between programming and math/science with respect to abstraction: in programming, we don’t just get to choose the abstraction, we get to design the system to match that abstraction. In math and the sciences, we don’t get to choose the structure of the underlying system; the only choice we have is in how to model it.
The way I’d choose to think about it is more like:
1. Languages, libraries, etc. are abstractions over an underlying system (some sort of imperfect Turing machine) that programmers don’t have much control over
2. Code is an abstraction over a real-world problem, meant to rigorize it to the point where it can be executed by a computer (much like math in e.g. physics is an abstraction meant to do… exactly the same thing, nowadays)
Granted, what the “immutable reality” and the “abstraction” are depends on whose view you take.
The main issue is that reality has structure (especially causal structure), and we don’t get to choose that structure.
Again, I think we do get to choose structure. If your requirement is e.g. building a search engine, and one of the abstractions you choose is “the bit that stores all the data for fast querying”, because that part interacts with the rest only through a few well-defined channels, then that is exactly like your cell biology analogy, for example.
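To make that concrete, here’s a minimal sketch of such a component (the names InvertedIndex, add_document and query are purely illustrative, not from this discussion): the rest of the search engine only ever touches it through two calls, so its internals can be redesigned freely.

```python
# Illustrative inverted index; everything else in the system (crawler, ranker, UI)
# interacts with it only through add_document() and query(). Names are hypothetical.
from collections import defaultdict

class InvertedIndex:
    def __init__(self):
        self._postings = defaultdict(set)  # term -> set of document ids

    def add_document(self, doc_id, text):
        for term in text.lower().split():
            self._postings[term].add(doc_id)

    def query(self, term):
        return self._postings.get(term.lower(), set())

index = InvertedIndex()
index.add_document(1, "abstraction in programming")
index.add_document(2, "abstraction in physics")
print(index.query("programming"))  # {1}
```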
To draw a proper analogy between abstraction-choice in biology and programming: imagine that you were performing reverse compilation. You take in assembly code, and attempt to provide equivalent, maximally-human-readable code in some other language. That’s basically the right analogy for abstraction-choice in biology.
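To give a toy picture of why that direction is so constrained (the instruction set below is made up, and a real decompiler is enormously more complicated): given a flat, stack-machine-style instruction sequence, there are very few sensible higher-level forms to recover; the low-level structure essentially dictates the abstraction.

```python
# Toy "reverse compilation": rebuild a readable expression from a flat instruction
# sequence. The instruction names are hypothetical, purely for illustration.
def decompile(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(str(args[0]))
        elif op in ("add", "mul"):
            rhs, lhs = stack.pop(), stack.pop()
            symbol = "+" if op == "add" else "*"
            stack.append(f"({lhs} {symbol} {rhs})")
        else:
            raise ValueError(f"unknown instruction: {op}")
    return stack.pop()

# push 2; push 3; add; push 4; mul  ->  "((2 + 3) * 4)"
print(decompile([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]))
```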
Ok, granted, but programmers literally write abstractions to do just that when they write code for reverse engineering… and as far as I’m aware the abstractions we have work quite well for it, and people doing reverse engineering follow the same abstraction-choosing and abstraction-creating rules as every other programmer.
Picture that, and hopefully it’s clear that there are far fewer degrees of freedom in the choice of abstraction, compared to normal programming problems. That’s why people in math/science don’t experiment with alternative abstractions very often compared to programming: there just aren’t that many options which make any sense at all. That’s not to say that progress isn’t made from time to time; Feynman’s formulation of quantum mechanics was a big step forward. But there’s not a whole continuum of similarly-decent formulations of quantum mechanics like there is a continuum of similarly-decent programming languages; the abstraction choice is much more constrained.
I mean, this is what the problem boils down to at the end of the day: the number of degrees of freedom you have to work with. But the fact that the sciences have few of them seems non-obvious to me.
Again, keep in mind that programmers also work within constraints, sometimes very, very tight constraints; e.g. banking software’s requirements are much stricter (if simpler) than those of a theory that explains RNA polymerase’s binding affinity to various sites.
It seems that you are trying to imply there’s something fundamentally different between the degrees of freedom in programming and those in science, but I’m not sure I can quite make it out from your comment.
Let me try another explanation.

The main point is: given a system, we don’t actually have that many degrees of freedom in what abstractions to use in order to reason about the system. That’s a core component of my research: the underlying structure of a system forces certain abstraction-choices; choosing other abstractions would force us to carry around lots of extra data.
However, if we have the opportunity to design a system, then we can choose what abstraction we want and then choose the system structure to match that abstraction. The number of degrees of freedom expands dramatically.
In programming, we get to design very large chunks of the system; in math and the sciences, less so. It’s not a hard dividing line—there are design problems in the sciences and there are problem constraints in programming—but it’s still a major difference.
In general, we should expect that looking for better abstractions is much more relevant to design problems, simply because the possibility space is so much larger. For problems where the system structure is given, the structure itself dictates the abstraction choice. People do still screw up and pick “wrong” abstractions for a given system, but since the space of choices is relatively small, it takes a lot less exploration to converge to pretty good choices over time.
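As a rough sketch of the design direction (hypothetical names, a minimal example, not anything claimed in the discussion): when we are the ones building the system, we can write the abstraction down first, as an interface, and then shape the implementation so that nothing leaks across the boundary we chose.

```python
# Sketch: choose the abstraction first, then design the system structure to match it.
# KeyValueStore / InMemoryStore are illustrative names, not from the discussion.
from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    """The abstraction chosen up front: exactly two operations, nothing else."""

    @abstractmethod
    def put(self, key, value): ...

    @abstractmethod
    def get(self, key): ...

class InMemoryStore(KeyValueStore):
    """One possible system structure, built to fit the chosen abstraction exactly."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

store = InMemoryStore()
store.put("example", "structure designed to match the abstraction")
print(store.get("example"))
```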
Alright, I think what you’re saying makes more sense, and I think in principle I agree, as long as you don’t claim there’s a clear division between, let’s call them, design problems and descriptive problems.
However, it seems to me that you are partially basing this hypothesis on science being more unified than I think it actually is.
I.e. if the task of physicists were to design an abstraction that fully explained the world, then I would indeed understand how that’s different from designing an abstraction that is meant to work very well for a niche set of problems, such as parsing ASTs or creating encryption algorithms (i.e. things for which specialized languages and libraries exist).
However, it seems to me that, in practice, scientific theory is not at all unified, and the few parts of it that are unified are the ones that tend to be “wrong” on closer inspection and just serve as an entry point into the more “correct” and complex theories that can be used to solve relevant problems.
So if, e.g., there were one theory to explain interactions in the nucleus and it were consistent with the rest of physics, I would agree that maybe it’s hard to come up with another one. If there are 5 different theories, all of them designed to explain specific cases, with fuzzy boundaries where they break down, and they kinda make sense in the wider context if you squint a bit but not that much… then that feels much closer to the way programming tools are. To me it seems like physics is much closer to the second scenario, but I’m not a physicist, so I don’t know.
Even more so, it seems that scientific theory, much like a programming abstraction, is often constrained by things such as speed. I.e. a theory can be “correct”, but if the computations are too complex to carry out (e.g. trying to simulate macromolecules using elementary-particle-based simulations) then the theory is not considered for a certain set of problems. This is very similar to e.g. not using Haskell for a certain library (e.g. one meant to simulate elementary-particle-based physics, and thus requiring very fast computations), even though in theory Haskell could produce simpler and easier-to-validate (read: with fewer bugs) code than Fortran or C.
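A toy illustration of that speed constraint (the setup and numbers below are purely illustrative, not from the discussion): the same quantity, gas pressure, estimated from a coarse one-formula theory versus from per-particle data. The coarse version is O(1), the per-particle version is O(N), and an actual dynamical simulation would be far more expensive still, which is why the coarser abstraction gets used whenever it’s accurate enough.

```python
# Crude comparison: ideal gas law (O(1)) vs. a kinetic-theory estimate from
# individual particle speeds (O(N)). All values are illustrative.
import random

K_B = 1.380649e-23   # Boltzmann constant, J/K
MASS = 6.63e-26      # mass of one argon atom, kg

def pressure_ideal_gas(n, temperature, volume):
    """Coarse abstraction: P = N * k_B * T / V, a single formula."""
    return n * K_B * temperature / volume

def pressure_kinetic(speeds, volume):
    """Finer abstraction: P = N * m * <v^2> / (3 * V), needs every particle's speed."""
    mean_sq = sum(v * v for v in speeds) / len(speeds)
    return len(speeds) * MASS * mean_sq / (3 * volume)

n, temperature, volume = 100_000, 300.0, 1e-3  # particles, kelvin, cubic metres
# Crude stand-in for thermal speeds, chosen so the two estimates roughly agree.
rms = (3 * K_B * temperature / MASS) ** 0.5
speeds = [random.gauss(rms, 0.1 * rms) for _ in range(n)]

print(pressure_ideal_gas(n, temperature, volume))  # about 4.1e-13 Pa here
print(pressure_kinetic(speeds, volume))            # roughly the same value
```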