I think that there’s a sliding scale between a black-box model and a gears-level model; any gears-level model has black-box components, and a mostly black-box model may include gears.
E.g. if you experimentally arrive at a physics equation that correctly describes how the wheel-with-weights behaves under a wide variety of parameters, this is more gearsy than just knowing the right settings for one set of parameters. But the deeper laws of physics which generated that equation are still a black box. While you might know how to adjust the weights if the slope changes, you wouldn’t know how to adjust them if the fundamental physical constants changed.
(Setting aside the point that if fundamental physical constants changed, they would also break your body, so you couldn’t adjust the weights; you would be dead anyway.)
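To make the contrast concrete, here’s a minimal Python sketch. The slope values, the fitted constants, and the linear form of the equation are all hypothetical stand-ins, not anything from the post:

```python
# Black-box knowledge: one weight setting, found by trial and error,
# known to work for one particular slope.
TUNED_SETTINGS = {15.0: 2.7}  # slope in degrees -> weight position

def black_box_setting(slope_deg):
    # Only answers for the slope we actually tested; anything else fails.
    return TUNED_SETTINGS[slope_deg]

# Gearsier knowledge: an experimentally fitted equation relating slope
# to the right weight position. It generalizes across slopes, but the
# constants A and B are themselves black boxes -- we fit them without
# knowing the deeper physics that generated them.
A, B = 0.12, 0.9  # hypothetical fitted constants

def fitted_setting(slope_deg):
    return A * slope_deg + B

print(fitted_setting(15.0))  # ~2.7, agrees with the tuned setting
print(fitted_setting(25.0))  # ~3.9, extrapolates to an untested slope
```

If the fundamental constants changed, A and B would silently become wrong, and the fitted equation gives no way to recompute them; that is exactly the sense in which the deeper physics stays a black box.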
To put it in different terms, in a black-box model you take some things as axiomatic. Any kind of reasoning requires you to eventually fall back on axioms that are not justified further, so all models are at least somewhat black-boxy. The difference is in whether you settle on axioms that are useful for a narrow set of circumstances, or on ones which allow for broader generalization.
I think you’re pointing to a true and useful thing, but “sliding scale” isn’t quite the right way to characterize it. Rather, I’d say that we’re always operating at some level(s) of abstraction, and there’s always a lowest abstraction level in our model—a ground-level abstraction, in which the pieces are atomic. For a black-box method, the ground-level abstraction just has the one monolithic black box in it.
A gearsy method has more than just one object in its ground-level abstraction. There’s some freedom in how deep the abstraction goes—we could say a gear is atomic, or we could go all the way down to atoms—and the objects at the bottom will always be treated as black boxes. But I’d say it’s not quite right to think of the model as “partially black-box” just because the bottom-level objects are atomic; it’s usually the top-level breakdown that matters. E.g., in the maze example from the post, the top and bottom halves of the maze are still atomic black boxes, but our insight is still 100% gearsy—it is an insight which will never apply to some random black box in the wild.
Gears/no gears is a binary distinction; there’s a big qualitative jump between a black-box method which uses no information about internal system structure, and a gearsy model which uses any information about internal structure at all (even just very simple information). We can add more gears, reducing the black-box components of a gears-level model. But as soon as we make the very first jump from one monolithic black box to two atomic gears, we’ve gone from a black-box method which applies to any random system to a gears-level investment which will pay out on our particular system and systems related to it.
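As a concrete illustration of that first jump, here’s a minimal Python sketch of the maze case. The crossing points and the per-half path tables are hypothetical stand-ins for opaque solvers; nothing here is from the post:

```python
# A toy maze split into a top half and a bottom half, connected only
# at two known crossing points. Each half is represented by an opaque
# lookup standing in for a black-box solver: we can ask it for a path
# but never look inside.
CROSSINGS = ["c1", "c2"]

TOP_PATHS = {
    ("start", "c1"): ["start", "t1", "c1"],
    ("start", "c2"): ["start", "t2", "c2"],
}
BOTTOM_PATHS = {
    ("c1", "goal"): None,                   # c1 dead-ends in the bottom half
    ("c2", "goal"): ["c2", "b1", "goal"],
}

def solve_maze(start, goal):
    # The single gearsy insight: any start-to-goal path must pass
    # through a crossing. So we compose two black-box half-solutions
    # instead of searching the maze as one monolithic box.
    for c in CROSSINGS:
        top = TOP_PATHS.get((start, c))
        bottom = BOTTOM_PATHS.get((c, goal))
        if top and bottom:
            return top + bottom[1:]         # splice at the shared crossing
    return None

print(solve_maze("start", "goal"))  # ['start', 't2', 'c2', 'b1', 'goal']
```

The payoff here is exactly the gears-level kind: the composition works only because we know this maze splits at those crossings, and it buys us nothing for an arbitrary black box.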