I think that talking about how “gearsy” a model is makes a lot of sense. The deeper you go into defining subsystems, the fewer gears the model has.
I think the type of “combined model” I’m talking about here feels super blackboxy. As soon as I say “I’m going to take a game-theoretic model, a loyalty-based model, and an outside-view ‘what happened previously’ model, and average them (without any idea how they fit together),” it feels more blackboxy to me, even though the gears inside the black box are themselves gearsy.
The benefit of this type of model is of course that you can develop more gears that show how the models relate to each other over time.
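A minimal sketch of what such an averaged “combined model” might look like. The three sub-model functions here are hypothetical stubs of my own, just to make the shape concrete; the point is that the top-level averaging step treats them as interchangeable numbers:

```python
# Hypothetical sub-models: each maps a situation to a probability
# that, say, an ally defects. Each could be gears-level internally,
# but the combination below ignores how they relate.
def game_theoretic_model(situation):
    return 0.7  # stub: payoff-matrix reasoning would go here

def loyalty_model(situation):
    return 0.2  # stub: relationship-history reasoning would go here

def outside_view_model(situation):
    return 0.5  # stub: base rate of defection in similar past cases

def combined_model(situation):
    # Plain average, with no idea how the sub-models fit together --
    # this top-level step is the blackboxy part.
    predictions = [
        game_theoretic_model(situation),
        loyalty_model(situation),
        outside_view_model(situation),
    ]
    return sum(predictions) / len(predictions)

print(combined_model("some situation"))
```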
Edit:
I’m curious if you agree with the conception of gears being capital investments towards specific expertise, and black boxes being capital investments towards generalizable advantage.
I definitely agree that combining models—especially by averaging them in some way—is very blackboxy. The individual models being averaged can each be gears-level models, though.
Circling back to my main definition: it’s the top-level division which makes a model gearsy/non-gearsy. If the top-level is averaging a bunch of stuff, then that’s a black-box model, even if it’s using some gears-level models internally. If the top-level division contains gears, then that’s a gears-level model, even if the gears themselves are black boxes. (Alternatively, we could say that “gears” vs “black box” is a characterization of each level/component of the model, rather than a characterization of the model as a whole.)
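The complementary case can be sketched the same way: a top-level division made of explicit gears, where each gear is itself an opaque black box. The names and numbers below are hypothetical, purely for illustration:

```python
# Hypothetical: the top level is gears-level -- it states HOW the
# parts relate (profit = revenue - costs) -- even though each part
# is an opaque black box internally.
def predicted_revenue(month):
    return {"jan": 100.0, "feb": 120.0}[month]  # opaque fitted values

def predicted_costs(month):
    return {"jan": 80.0, "feb": 90.0}[month]    # opaque fitted values

def predicted_profit(month):
    # The subtraction is the "gear": an explicit claim about how the
    # components fit together, not an average of interchangeable guesses.
    return predicted_revenue(month) - predicted_costs(month)
```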
I’m curious if you agree with the conception of gears being capital investments towards specific expertise, and black boxes being capital investments towards generalizable advantage.
I don’t think black boxes are capital investments towards generalizable advantage. Black-box methods are generalizable, in the sense that they work on basically any system. But individual black-box models are not generalizable—a black-box method needs to build a new model whenever the system changes. That’s why black-box methods don’t involve an investment—when a black-box method encounters a new problem/system, it starts from scratch. Something like “learn how to do A/B tests” is an investment in learning how to apply a black-box method, but the A/B tests themselves are not an investment (or to the extent they are, they’re an investment which depreciates very quickly)—they won’t pay off over a very long time horizon.
So learning how to apply a black-box method, in general, is a capital investment towards generalizable advantage. But actually using a black-box method—i.e. producing a black-box model—is usually not a capital investment.
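To illustrate the distinction (my example, not the parent’s): the skill of running the test below generalizes to almost any system, but the number it produces is a fact about one specific system at one specific time, and must be re-derived from scratch when the system changes. This is a standard two-proportion z-test, written with only the standard library:

```python
import math

def ab_test_z(conversions_a, n_a, conversions_b, n_b):
    """Two-proportion z-test: a generic black-box method that works
    on basically any system, but whose result tells you only about
    this system, right now."""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Knowing HOW to run this test is a durable investment; the z-score
# itself depreciates as soon as the underlying system changes.
z = ab_test_z(120, 1000, 150, 1000)
```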
(BTW, learning how to produce gears-level models is a capital investment which makes it cheaper to produce future capital investments.)