I think you’re pointing to a true and useful thing, but “sliding scale” isn’t quite the right way to characterize it. Rather, I’d say that we’re always operating at some level(s) of abstraction, and there’s always a lowest abstraction level in our model—a ground-level abstraction, in which the pieces are atomic. For a black-box method, the ground-level abstraction just has the one monolithic black box in it.
A gearsy method has more than just one object in its ground-level abstraction. There’s some freedom in how deep the abstraction goes—we could say a gear is atomic, or we could go all the way down to atoms—and the objects at the bottom will always be treated as black boxes. But I’d say it’s not quite right to think of the model as “partially black-box” just because the bottom-level objects are atomic; it’s usually the top-level breakdown that matters. E.g., in the maze example from the post, the top and bottom halves of the maze are still atomic black boxes, but our gearsy insight is still 100% gearsy—it is an insight which will not ever apply to some random black box in the wild.
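To make the maze point concrete, here is a minimal sketch (all names and the toy maze are invented for illustration): a solver that uses exactly one gear, the knowledge that the maze splits into two halves connected only through known doorway cells. Each half is still handled by an opaque per-half routine, yet the decomposition only pays off on mazes with this structure.

```python
from collections import deque

def black_box_solve(graph, start, goal):
    """Opaque per-half solver. Internally it happens to be a BFS,
    but the gearsy method treats it as a black box: path in, path out.
    Returns a list of cells or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def gearsy_solve(top, bottom, start, goal, doorways):
    """The one gear: any solution must pass through a doorway,
    so we compose two black-box sub-solutions instead of
    searching the whole maze at once."""
    for door in doorways:
        first = black_box_solve(top, start, door)
        second = black_box_solve(bottom, door, goal)
        if first and second:
            return first + second[1:]
    return None

# Toy maze: cells A-C in the top half, C-E in the bottom half,
# connected only through the shared doorway cell C.
top = {"A": ["B"], "B": ["C"], "C": []}
bottom = {"C": ["D"], "D": ["E"], "E": []}
print(gearsy_solve(top, bottom, "A", "E", ["C"]))  # ['A', 'B', 'C', 'D', 'E']
```

Note that `gearsy_solve` knows nothing about how either half is laid out internally; its only structural commitment is "solutions go through a doorway," which is exactly the kind of insight that would not transfer to an arbitrary black box.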
Gears/no gears is a binary distinction; there’s a big qualitative jump between a black-box method which uses no information about internal system structure, and a gearsy model which uses any information about internal structure (even just very simple information). We can add more gears, reducing the black-box components in a gears-level model. But as soon as we make the very first jump from one monolithic black box to two atomic gears, we’ve gone from a black-box method which applies to any random system, to a gears-level investment which will pay out on our particular system and systems related to it.