A gears-level model is ‘well-constrained’ in the sense that the things you observe are strongly connected to one another: it would be hard to imagine one of the variables being different while all of the others remained the same.
Related Tags: Anticipated Experiences, Double-Crux, Empiricism, Falsifiability, Map and Territory
The term gears-level was introduced on LW in the post “Gears in Understanding”:
This property is how deterministically interconnected the variables of the model are. There are a few tests I know of to see to what extent a model has this property, though I don’t know if this list is exhaustive and would be a little surprised if it were:
1. Does the model pay rent? If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?
2. How incoherent is it to imagine that the model is accurate but that a given variable could be different?
3. If you knew the model were accurate but you were to forget the value of one variable, could you rederive it?
An example from Gears in Understanding of a gears-level model is (surprise) a box of gears. If you can see a series of interlocked gears, alternately turning clockwise, then counterclockwise, and so on, then you are able to anticipate the direction of any given gear, even if you cannot see it. It would be very difficult to imagine all of the gears turning as they are while only one of them changed direction yet remained interlocked. And finally, you would be able to rederive the direction of any given gear if you forgot it.
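To make the constraint concrete, here is a minimal Python sketch (not from the original post; the chain-of-gears setup and function name are illustrative assumptions) showing how, once you know the direction of the first gear, the direction of every other gear in an interlocked chain is fully determined by its position alone.

```python
# Illustrative sketch: in a chain of interlocked gears, adjacent gears turn in
# opposite directions, so every gear's direction is fixed by the direction of
# the first gear and the gear's position in the chain.

def gear_direction(first_direction: str, index: int) -> str:
    """Derive the direction of gear `index` (0-based) from gear 0's direction.

    Gears at even positions share gear 0's direction; gears at odd positions
    turn the opposite way.
    """
    opposite = {"clockwise": "counterclockwise", "counterclockwise": "clockwise"}
    return first_direction if index % 2 == 0 else opposite[first_direction]


if __name__ == "__main__":
    # Anticipating a hidden variable: even if you cannot see (or have
    # forgotten) gear 5, its direction can be rederived from the model.
    print(gear_direction("clockwise", 5))  # -> counterclockwise

    # Incoherence of changing one variable alone: the model leaves no freedom
    # for gear 5 to turn clockwise while the rest of the chain stays the same.
    assert gear_direction("clockwise", 5) != "clockwise"
```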
Note that the author of Gears in Understanding, Valentine, was careful to point out that these tests do not fully define the property ‘gears-level’, and that “Gears-ness is not the same as goodness”: there are other things that are valuable in a model, and many things cannot practically be modelled in this fashion. If you intend to use the term, it is highly recommended that you read the post beforehand, as the concept is not easily defined.