A dimension I like is how much a model bears “long” chains of inference. (Metaphorically long, not necessarily many steps.) Can I tell you the model, then ask you what the model says about X, and you don’t immediately see it, but once I give you an argument the model makes, you can see for yourself that the model does say that? Then that’s a gears-level model.
Gears-level models make surprising predictions from apparently unsurprising elements. E.g. a model that says “there’s some gears in the box, connected in series by meshing teeth” sounds sort of anodyne, but using inference, you can get a precise non-obvious prediction out of the model: turning the left gear Z-wise makes the right gear turn counter-Z-wise, and vice versa.
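To make that inference concrete, here’s a minimal sketch (the function name and string encoding of directions are mine, purely illustrative): the only fact encoded is that meshing gears counter-rotate, yet chaining that fact predicts the direction of any gear in the train from the direction of the first.

```python
def gear_direction(first_direction: str, gear_index: int) -> str:
    """Direction of gear `gear_index` (1-based) in a train of gears meshed
    in series, given the direction of gear 1. Adjacent meshing gears
    counter-rotate, so direction alternates with the parity of the index."""
    if gear_index % 2 == 1:  # odd-indexed gears co-rotate with gear 1
        return first_direction
    return "counter-Z-wise" if first_direction == "Z-wise" else "Z-wise"

# The model's forced prediction for the two-gear box: turning the left
# gear Z-wise turns the right gear counter-Z-wise.
assert gear_direction("Z-wise", 2) == "counter-Z-wise"
# And a longer chain of the same inference: gear 7 co-rotates with gear 1.
assert gear_direction("Z-wise", 7) == "Z-wise"
```

The parity rule is the “long chain” collapsed: each meshing step flips the direction, so the conclusion about gear N isn’t visible in the model’s statement, but it follows once you walk the argument through.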