The opening sounds a lot like saying “aerodynamics used to be a science until people started building planes.”
The idea that an area of study is less scientific because the subject is inelegant is a blinkered view of what science is. A physicist’s view. It is one I’m deeply sympathetic to, and if your definition of science is Rutherford’s (“all science is either physics or stamp collecting”), you might be right; but a reasonable one that includes chemistry would have to include AI as well.
See my reply to Bogdan here. The issue isn’t “inelegance”; we lack even an inelegant ability to predict or explain how particular ML systems do what they do.
Modern ML is less like modern chemistry, and more like ancient culinary arts and medicine. (Or “ancient culinary arts and medicine shortly after a cultural reboot”, such that we have a relatively small number of recently-developed shallow heuristics and facts to draw on, rather than centuries of hard-earned experience.)
The reason this analogy doesn’t land for me is that I don’t think our epistemic position regarding LLMs is similar to, e.g., the Wright brothers’ epistemic position regarding heavier-than-air flight.
The point Nate was trying to make with “ML is no longer a science” wasn’t “boo current ML that actually works, yay GOFAI that didn’t work”. The point was exactly to draw a contrast between, e.g., our understanding of heavier-than-air flight and our understanding of how the human brain works. The invention of useful tech that interfaces with the brain doesn’t entail that we understand the brain’s workings in the way we’ve long understood flight; it depends on what the (actual or hypothetical) tech is.
Maybe a clearer way of phrasing it is “AI used to be failed science; now it’s (mostly, outside of a few small oases) a not-even-attempted science”. “Failed science” maybe makes it clearer that the point here isn’t to praise the old approaches that didn’t work; there’s a more nuanced point being made.
While theoretical physics is less “applied science” than chemistry, there’s still a real difference between chemistry and chemical engineering.
For context, I am a Mechanical Engineer, and while I do occasionally check the system I am designing and try to understand/verify how well it is working, I am fundamentally not doing science. The main goal is solving a practical problem (i.e., with as little theoretical understanding as is sufficient), whereas in science the understanding is the main goal, or at least closer to it.
The canonical source for this is What Engineers Know and How They Know It, though I confess I haven’t actually read the book myself.
Certainly, I understand that this science vs. engineering, pure vs. applied, fundamental vs. emergent, theoretical vs. computational vs. observational/experimental classification is fuzzy: relevant xkcd, smbc. Hell, even the math vs. physics vs. chemistry vs. biology distinctions are fuzzy!
What I am saying is that either your definition has to be so narrow as to exclude most of what is generally considered “science” (à la Rutherford, ironically a Chemistry Nobel laureate), or you need to exclude AI via special pleading. Specifically, my claim is that AI research is closer to physics (the simulations/computation end) than chemistry is. Admittedly, this claim is based on vibes, but if pressed, I could probably point to how many people transition from one field to the other.
Hmm, in that case maybe I misunderstood the post. My impression wasn’t that he was saying AI literally isn’t a science anymore, but rather that engineering work is getting too far ahead of the science part, and that in practice most ML progress now is just ML engineering, where understanding is only a means to an end (and so is not as deep as it would be if it were science first).
I would guess that engineering gets ahead of science pretty often, but maybe in ML it’s more pronounced: the hype and money flowing in, as well as perhaps the perceived relatively low stakes (unlike aerospace, or medical robotics, which is my field) not scaring ML engineers enough to actually care about deep understanding, and also perhaps the inscrutable nature of ML. If it were easy to understand, it wouldn’t be as unappealing to spend resources doing so.
I don’t really have a take on where the inelegance comes into play here.