Ultimately, this is a picture of “gears-level science”: look for mediation, hunt down sources of randomness, rule out the influence of all the other variables in the universe.
I’m very doubtful that hunting down sources of randomness is a good way to go about doing science where there’s a big solution space.
There’s a lot of human pattern matching involved in coming up with good hypotheses to test.
I think you’re pointing to the same issue that Adam Zerner was pointing to. Hunting down sources of randomness is a good goal when doing science, but that doesn’t tell us much about how to go about the hunt when the solution space is very large.
It sort of feels like switching back and forth between searching for what works at all and searching for things to rule out is analogous to research and development. Iterating between the two feels like how knowledge gets refined.
Also: imagining science as “optimizing from zero” is aesthetically pleasing to me.
Fleshing this out a bit more, within the framework of this comment: when we can consistently predict some outcomes using only a handful of variables, we’ve learned a (low-dimensional) constraint on the behavior of the world. For instance, the gas law PV = nRT is a constraint on the relationship between variables in a low-dimensional summary of a high-dimensional gas. (More precisely, it’s a template for generating low-dimensional constraints on the summary variables of many different high-dimensional gases.)
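Here’s a toy sketch of what I mean by “constraint on the summary variables” (my own illustration, with made-up numbers, not anything from the original comment): the gas law says any three of the summary variables pin down the fourth, so the realizable (P, V, n, T) tuples form a three-dimensional surface inside a four-dimensional space, and any measurement off that surface is ruled out.

```python
R = 8.314  # J/(mol*K), ideal gas constant

def satisfies_gas_law(P, V, n, T, rel_tol=1e-2):
    """Check whether a (P, V, n, T) summary lies on the PV = nRT constraint surface."""
    return abs(P * V - n * R * T) <= rel_tol * n * R * T

# One mole of gas near room temperature and atmospheric pressure:
print(satisfies_gas_law(P=101_325, V=0.0245, n=1.0, T=298.0))  # True: on the constraint surface
print(satisfies_gas_law(P=101_325, V=0.0500, n=1.0, T=298.0))  # False: ruled out by the constraint
```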
When we flip perspective to problems of design (e.g. engineering), those constraints provide the structure of our problem—analogous to the walls in a maze. We look for “paths in the maze”—i.e. designs—which satisfy the constraints. Duality says that those designs act as constraints when searching for new constraints (i.e. doing science). If engineers build some gadget that works, then that lets us rule out some constraints: any constraints which would prevent the gadget from working must be wrong.
Data serves a similar role (echoing your comment here). If we observe some behavior, then that provides a constraint when searching for new constraints. Data and working gadgets live “in the same space”—the space of “paths”: things which definitely do work in the world and therefore cannot be ruled out by constraints.
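To make the “same space” point concrete, here’s a minimal sketch of the duality (the constraint names, gadgets, and numbers are all invented for illustration): data points and working gadgets sit on one side as things that demonstrably exist, candidate constraints sit on the other, and each side filters the other.

```python
# Candidate "constraints": predicates over observations/designs.
candidate_constraints = {
    "efficiency <= 1":   lambda x: x["efficiency"] <= 1.0,
    "efficiency <= 0.3": lambda x: x["efficiency"] <= 0.3,
    "mass >= 1 kg":      lambda x: x["mass_kg"] >= 1.0,
}

# "Paths": things that definitely work in the world -- observed data and built gadgets.
known_working = [
    {"name": "lab measurement", "efficiency": 0.25, "mass_kg": 2.0},
    {"name": "prototype motor", "efficiency": 0.45, "mass_kg": 1.5},
]

# Science direction: discard any candidate constraint that a working thing violates.
surviving = {
    name: rule
    for name, rule in candidate_constraints.items()
    if all(rule(x) for x in known_working)
}
print(sorted(surviving))  # "efficiency <= 0.3" is ruled out by the prototype motor

# Design direction: a proposed gadget must satisfy every surviving constraint.
proposal = {"name": "new design", "efficiency": 0.9, "mass_kg": 1.2}
print(all(rule(proposal) for rule in surviving.values()))  # True under these toy constraints
```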
You know, I had never explicitly considered that data and devices would be in the same abstract space, but as soon as I read the words it was obvious. Thank you for that!
Building devices is actually like setting up physical experiments. If they do what they’re supposed to do, you can increase your confidence in the mechanisms that explain how they work.
In the realm of biology, I think hunting for patterns, especially the ones you care about, is a better approach than hunting for randomness.
Much of the time, the randomness is the result of complex interactions that can’t easily be reduced.