Physics has existed for hundreds of years. Why can you reach the frontier of knowledge with just a few years of study? Think of all the thousands of insights and ideas and breakthroughs that have been had—yet, I do not imagine you need most of those to grasp modern consensus.
Idea 1: the tech tree is rather horizontal—for any given question, several approaches and frames are tried. Some are inevitably more attractive or useful. You can view a Markov decision process in several ways—through the Bellman equations, through the structure of the state visitation distribution functions, through the environment’s topology, through Markov chains induced by different policies. Almost everyone thinks about them in terms of Bellman equations, there were thousands of papers on that frame pre-2010, and you don’t need to know most of them to understand how deep Q-learning works.
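To make the Bellman frame concrete, here is a minimal sketch of value iteration on a tiny made-up two-state MDP (the states, actions, rewards, and transition table are all hypothetical, invented purely for illustration). It repeatedly applies the Bellman optimality update V(s) ← max_a [R(s,a) + γ Σ_s' P(s'|s,a) V(s')] until the values converge:

```python
# Hypothetical 2-state MDP: P[s][a] lists (next_state, probability); R[s][a] is the reward.
P = {
    0: {"stay": [(0, 1.0)], "move": [(1, 1.0)]},
    1: {"stay": [(1, 1.0)], "move": [(0, 1.0)]},
}
R = {
    0: {"stay": 0.0, "move": 1.0},
    1: {"stay": 2.0, "move": 0.0},
}
gamma = 0.9  # discount factor

# Value iteration: apply the Bellman optimality operator until (near) convergence.
V = {0: 0.0, 1: 0.0}
for _ in range(1000):
    V = {
        s: max(
            R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
            for a in P[s]
        )
        for s in V
    }

print(V)  # -> roughly {0: 19.0, 1: 20.0}: loop in state 1 for reward 2 per step
```

Deep Q-learning is this same update in disguise, with a neural network standing in for the table of values; none of the other frames (state-visitation distributions, environment topology, induced Markov chains) are needed to follow it.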
Idea 2: some “insights” are wrong (phlogiston) or approximate (Newtonian mechanics) and so are later discarded. The insights become historical curiosities and/or pedagogical tools and/or numerical approximations of a deeper phenomenon.
Idea 3: most work is on narrow questions which end up being dead ends or not generalizing. As a dumb example, I could construct increasingly precise torsion balance pendulums in order to measure the mass of my copy of Dune to ever greater accuracy. I would be learning new facts about the world using a rigorous and accepted methodology. But no one would care.
More realistically, perhaps only a few other algorithms researchers care about my refinement of a specialized sorting algorithm (from O(n^1.1 log n) to O(n^1.05 log n)), but the contribution is still quite publishable and legible.
I’m not sure what publishing incentives were like before the second half of the 20th century, so perhaps this kind of research was less incentivized in the past.
Could this depend on your definition of “physics”? Like, if you use a narrow definition like “general relativity + quantum mechanics”, you can learn that in a few years. But if you include things like electricity, the expansion of the universe, fluid mechanics, particle physics, superconductors, optics, string theory, acoustics, aerodynamics… each of them may be relatively simple to learn, but all of them together is too much.
Maybe. I don’t feel like that’s the key thing I’m trying to point at here, though. The fact that you can understand any one of those in a reasonable amount of time is still surprising, if you step back far enough.