To add on to Satvik’s comment about black-box approaches—I often think black-box heuristic models are themselves capital investments.
For instance, knowing a few of the most basic games in game theory (Prisoner’s Dilemma, Stag Hunt, Battle of the Sexes, etc.) is not actually a very good gears-level model of how humans make decisions in any given situation. However, using it as one black-box model (among many that can help you predict the situation) is much more generalizable than trying to figure out the specifics of any given situation—understanding game theory here is a good capital investment in understanding people, even though it’s not a gears-level model of any specific situation.
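To make the “basic games as one black-box model” idea concrete, here is a minimal sketch in Python (the payoff numbers are the usual textbook ones, my assumption for illustration rather than anything from this thread): the game is treated as a box that takes a payoff structure and returns a prediction, without modeling anything about the particular people involved.

```python
import itertools

# Minimal sketch: brute-force pure-strategy Nash equilibria of a 2x2 game.
# Payoffs are the usual illustrative Prisoner's Dilemma numbers (higher is
# better); any payoffs with temptation > reward > punishment > sucker work.
ACTIONS = ("cooperate", "defect")
PAYOFFS = {  # (row action, col action) -> (row payoff, col payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash(row, col):
    """Neither player gains by unilaterally switching actions."""
    row_pay, col_pay = PAYOFFS[(row, col)]
    row_ok = all(PAYOFFS[(r, col)][0] <= row_pay for r in ACTIONS)
    col_ok = all(PAYOFFS[(row, c)][1] <= col_pay for c in ACTIONS)
    return row_ok and col_ok

equilibria = [p for p in itertools.product(ACTIONS, repeat=2) if is_nash(*p)]
print(equilibria)  # [('defect', 'defect')] -- the model's prediction
```

The same few lines apply to any situation you can squint at as a Prisoner’s Dilemma; that reusability is what makes it an investment.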
I think which one you use depends on your strategy for success—one path to success is specialization, and having a very good gears-level model of a single domain. Another path to success is being able to combine many black-box models to work cross-domain—perhaps the best strategy is a combination of a few gears-level models and many black-box models, to be the Pareto Best in the World.
I would characterize game theory, as applied to human interactions, as a gearsy model. It’s not a very high-fidelity model—a good analogy would be “spherical cow in a vacuum” or “sled on a perfectly smooth frictionless incline” in physics. And the components in game-theory models—the agents—are themselves black boxes which are really resistant to being broken open. But a model with multiple agents in it is not a single monolithic black box, and is therefore a gears-level model.
This is similar to my response to Kaj above: there’s a qualitative change in going from a model which treats the entire system as a single monolithic black box, to a model which contains any internal structure at all. As soon as we have any internal structure, the model will no longer apply to any random system in the wild—it will only apply to systems which share the relevant gears. In the case of game theory, our game-theoretic models are only relevant to systems with interacting agenty things; it won’t help us to e.g. design a calculator or find a short path through a maze. Those agenty things are the gears.
As in any gears-level model, the gears themselves can be black boxes, and that’s definitely the case for agents in game theory.
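To illustrate “gears that are themselves black boxes” in code (my own sketch, not anything from the thread; tit-for-tat and always-defect are standard textbook strategies I’m assuming for the demo): the interaction structure below is explicit, while each agent is an opaque callable that the model calls but never opens.

```python
# The *gears*: an explicit interaction structure. Two agents repeatedly play
# a matrix game, each seeing only the other's past moves.
PAYOFFS = {("cooperate", "cooperate"): (3, 3), ("cooperate", "defect"): (0, 5),
           ("defect", "cooperate"): (5, 0), ("defect", "defect"): (1, 1)}

def play_repeated_game(agent_a, agent_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = agent_a(history_b)  # each agent is a black box:
        move_b = agent_b(history_a)  # we call it, but never open it
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# The *black boxes*: opaque policies whose internals the model never uses.
def tit_for_tat(opponent_history):
    return opponent_history[-1] if opponent_history else "cooperate"

def always_defect(opponent_history):
    return "defect"

print(play_repeated_game(tit_for_tat, always_defect))  # (9, 14)
```

Swapping in a different opaque policy changes the predictions but not the model’s structure; that structure, rather than the agents’ internals, is what carries over to other systems with interacting agenty things.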
I think that talking about how “gearsy” a model is makes a lot of sense. The deeper you go into defining subsystems, the more gears the model has.
I think the type of “combined model” I’m talking about here feels super blackboxy. As soon as I say “I’m going to take a game-theoretic model, a loyalty-based model, and an outside-view ‘what happened previously’ model, and average them (without any idea how they fit together),” it feels more blackboxy to me, even though the gears inside the black box are themselves gearsy.
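A minimal sketch of that kind of blunt combination (my own illustration; the three component models are hypothetical stand-ins, not anything defined in this thread): each submodel may be gearsy internally, but the top level just averages their outputs with no model of how they fit together.

```python
# Minimal sketch: average three models' probability estimates for the same
# question ("will this person cooperate?") with no story of how they relate.
# All three submodels are hypothetical stand-ins for illustration.

def game_theory_model(situation):
    # Gearsy inside (incentives), but a black box from the top level's view.
    return 0.7 if situation["repeated_interaction"] else 0.2

def loyalty_model(situation):
    return 0.9 if situation["long_relationship"] else 0.4

def outside_view_model(situation):
    # Base rate from "what happened previously".
    return situation["past_cooperation_rate"]

MODELS = [game_theory_model, loyalty_model, outside_view_model]

def combined_prediction(situation):
    # The top-level operation is a plain average: pure black box.
    return sum(m(situation) for m in MODELS) / len(MODELS)

situation = {"repeated_interaction": True, "long_relationship": True,
             "past_cooperation_rate": 0.65}
print(round(combined_prediction(situation), 2))  # (0.7 + 0.9 + 0.65) / 3 = 0.75
```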
The benefit of this type of model is, of course, that over time you can develop more gears showing how the component models relate to each other.
Edit:
I’m curious if you agree with the conception of gears being capital investments towards specific expertise, and black boxes being capital investments towards generalizable advantage.
I definitely agree that combining models—especially by averaging them in some way—is very blackboxy. The individual models being averaged can each be gears-level models, though.
Circling back to my main definition: it’s the top-level division which makes a model gearsy/non-gearsy. If the top level is averaging a bunch of stuff, then that’s a black-box model, even if it’s using some gears-level models internally. If the top-level division contains gears, then that’s a gears-level model, even if the gears themselves are black boxes. (Alternatively, we could say that “gears” vs “black box” is a characterization of each level/component of the model, rather than a characterization of the model as a whole.)
> I’m curious if you agree with the conception of gears being capital investments towards specific expertise, and black boxes being capital investments towards generalizable advantage.
I don’t think black boxes are capital investments towards generalizable advantage. Black-box methods are generalizable, in the sense that they work on basically any system. But individual black-box models are not generalizable—a black-box method needs to build a new model whenever the system changes. That’s why black-box methods don’t involve an investment—when a black-box method encounters a new problem/system, it starts from scratch. Something like “learn how to do A/B tests” is an investment in learning how to apply a black-box method, but the A/B tests themselves are not an investment (or to the extent they are, they’re an investment which depreciates very quickly); they won’t pay off over a very long time horizon.
So learning how to apply a black-box method, in general, is a capital investment towards generalizable advantage. But actually using a black-box method—i.e. producing a black-box model—is usually not a capital investment.
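To make the A/B-test example concrete, here is a minimal sketch of the black-box method itself (a two-proportion z-test; the conversion counts are made-up numbers for illustration). Note what it produces: a verdict about this one comparison, which is exactly the kind of output that doesn’t transfer when the system changes.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Made-up data: variant A converts 120/1000 visitors, variant B 150/1000.
z, p = two_proportion_ztest(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # z = -1.96, p = 0.050
# The verdict is about *this* comparison only; it carries no model of why
# B won, so nothing transfers when the page, audience, or product changes.
```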
(BTW, learning how to produce gears-level models is a capital investment which makes it cheaper to produce future capital investments.)