I think this certainly describes a type of gears-level work scientists engage in, but not the only type, nor necessarily the most common one in a given field. There’s also model building, for example.
Even once you’ve figured out which dozen variables you need to control to get a sled to move at the same speed every time, you still can’t predict what that speed would be if you set these dozen variables to different values. You’ve got to figure out Newton’s laws of motion and friction before you can do that.
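To make the payoff concrete, here’s a minimal sketch (the incline setup, the numbers, and the `sled_speed` helper are my own illustration, not anything from an actual sled experiment): with Newton’s laws plus a friction model in hand, predicting the speed at never-measured settings becomes a one-line calculation.

```python
import math

def sled_speed(theta_deg, mu, distance, g=9.81):
    """Speed of a sled after sliding `distance` metres from rest down an
    incline of angle `theta_deg` with friction coefficient `mu`.

    Newton plus a friction model gives the net acceleration along the slope,
    a = g*(sin(theta) - mu*cos(theta)); constant acceleration from rest
    over `distance` then gives v = sqrt(2*a*distance).
    """
    theta = math.radians(theta_deg)
    a = g * (math.sin(theta) - mu * math.cos(theta))
    if a <= 0:
        return 0.0  # friction dominates; the sled never gets moving
    return math.sqrt(2 * a * distance)

# A table of controlled observations only covers the settings you measured;
# the model predicts settings you never tried:
print(sled_speed(theta_deg=20, mu=0.10, distance=50))  # about 15.6 m/s
print(sled_speed(theta_deg=35, mu=0.05, distance=80))  # about 28.9 m/s, never measured
```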
Finding out which variables are relevant to a phenomenon in the first place is usually a required initial step for building a predictive model, but it’s not the only step, nor necessarily the hardest one.
Another widespread type of scientific work I can think of is facilitating efficient calculation. Even if you have a deterministic model that you’re pretty sure could theoretically predict a class of phenomena perfectly, that doesn’t mean you have the computing power necessary to actually use it.
Lattice Quantum Chromodynamics should theoretically be able to predict all of nuclear physics, but employing it in practice requires coming up with all sorts of ingenious tricks and effective theories to reduce the computing power required for a given calculation. It’s enough to have kept a whole scientific field busy for over fifty years, and we’re still not close to actually being able to freely simulate every interaction of nucleons at the quark level from scratch.
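For a rough sense of why computing power is the bottleneck, here’s a back-of-envelope count (the lattice size is my own illustrative choice): even a single snapshot of the gauge field carries hundreds of millions of numbers, and realistic calculations need many such snapshots.

```python
# Back-of-envelope: degrees of freedom in one lattice-QCD gauge configuration.
# Each site of a 4D spacetime lattice carries one SU(3) "link" variable per
# direction, and an SU(3) group element has 8 real parameters.
L = 64              # sites per dimension (a moderately large modern lattice)
sites = L ** 4      # 4D spacetime volume
links = 4 * sites   # one link per spacetime direction per site
dof = 8 * links     # 8 real parameters per SU(3) link

print(f"{sites:.2e} sites, {dof:.2e} real degrees of freedom")
# Roughly 1.7e7 sites and 5.4e8 real numbers, and that is just *one* sample
# from the path integral; the Monte Carlo average needs many such
# configurations, each produced by an expensive Markov-chain update.
```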
> Even once you’ve figured out which dozen variables you need to control to get a sled to move at the same speed every time, you still can’t predict what that speed would be if you set these dozen variables to different values. You’ve got to figure out Newton’s laws of motion and friction before you can do that.
>
> Finding out which variables are relevant to a phenomenon in the first place is usually a required initial step for building a predictive model...
Exactly correct.
Part of the implicit argument of the post is that “figure out the dozen or so relevant variables” is the “hard” step in a big-O sense, when the number of variables in the universe is large. This is for largely the same reasons as in Everyday Lessons From High-Dimensional Optimization: in low dimensions, brute-force-ish methods are tractable. Thus we get things like tables of reaction rate constants. Before we had the law of mass action, there were too many variables potentially relevant to reaction rates to predict via brute force. But once we have mass action, there are few enough degrees of freedom that we can just try them out and make these tables of reaction constants.
Now, that still leaves the step of going from “temperature and concentrations are the relevant variables” to the law of mass action, but again, that’s the sort of thing where brute-force-ish exploration works pretty well. There is an insight step involved there, but it can largely be done by guess-and-check. And even before that insight is found, there are few enough variables involved that “make a giant table” is largely tractable.
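As a toy illustration of that tractability (the dataset and the numbers below are made up for the example): once mass action pins down the functional form, recovering the rate constant and reaction orders from a small table of measurements is a routine least-squares fit.

```python
import numpy as np

# Toy example: once mass action says rate = k * [A]**m * [B]**n, only a
# handful of degrees of freedom remain, and a small table of measurements
# pins them all down via a log-log least-squares fit.
rng = np.random.default_rng(0)
k_true, m_true, n_true = 0.5, 1.0, 2.0

A = rng.uniform(0.1, 2.0, size=30)    # concentrations of reactant A
B = rng.uniform(0.1, 2.0, size=30)    # concentrations of reactant B
rate = k_true * A**m_true * B**n_true * rng.lognormal(0.0, 0.05, size=30)

# log(rate) = log(k) + m*log(A) + n*log(B); solve by ordinary least squares.
X = np.column_stack([np.ones_like(A), np.log(A), np.log(B)])
(log_k, m_fit, n_fit), *_ = np.linalg.lstsq(X, np.log(rate), rcond=None)

print(f"k = {np.exp(log_k):.3f}, m = {m_fit:.2f}, n = {n_fit:.2f}")
# Before mass action, the candidate-variable list was open-ended; after it,
# three fitted numbers summarize the whole table.
```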
> Another widespread type of scientific work I can think of is facilitating efficient calculation...

Good example.
To clarify, my point was that, at least in my experience, this isn’t always the hard step. I can easily see that being the case in a “top-down” field, like a lot of engineering, medicine, parts of materials science, biology, and similar fields. There, my impression is that once you’ve figured out what a phenomenon is all about, it often really is as simple as fitting some polynomial of your dozen variables to the data.
But in some areas, like fundamental physics, which I’m involved in, building your model isn’t that easy or straightforward. For example, we’ve been looking for a theory of quantum gravity for ages. We know roughly what sort of variables it should involve. We know what data we want it to explain. But actually formulating that theory has proven hellishly difficult. We’ve been at it for over fifty years now, and we’re still not anywhere close to real success.