The idea that an area of study is less scientific because the subject is inelegant is a blinkered view of what science is.
See my reply to Bogdan here. The issue isn’t “inelegance”; we lack even an inelegant ability to predict or explain how particular ML systems do what they do.
Modern ML is less like modern chemistry, and more like ancient culinary arts and medicine. (Or “ancient culinary arts and medicine shortly after a cultural reboot”, such that we have a relatively small number of recently-developed shallow heuristics and facts to draw on, rather than centuries of hard-earned experience.)
The opening sounds a lot like saying “aerodynamics used to be a science until people started building planes.”
The reason this analogy doesn’t land for me is that I don’t think our epistemic position regarding LLMs is similar to, e.g., the Wright brothers’ epistemic position regarding heavier-than-air flight.
The point Nate was trying to make with “ML is no longer a science” wasn’t “boo current ML that actually works, yay GOFAI that didn’t work”. The point was exactly to draw a contrast between, e.g., our understanding of heavier-than-air flight and our understanding of how the human brain works. The invention of useful tech that interfaces with the brain doesn’t entail that we understand the brain’s workings in the way we’ve long understood flight; it depends on what the (actual or hypothetical) tech is.
Maybe a clearer way of phrasing it is “AI used to be failed science; now it’s (mostly, outside of a few small oases) a not-even-attempted science”. “Failed science” makes it clearer that the point here isn’t to praise the old approaches that didn’t work; there’s a more nuanced point being made.