I basically agree with this if we’re viewing this post as a standalone. I only had so much space to recursively unpack things, and I figure that the claim will make more sense if people go read a few of the posts on gears-level models and then think for themselves a bit about what gears-level models look like for questions like “why does modularity show up in evolved/trained systems?”.
When I say “same rough shape as a proof”, I don’t necessarily mean any reasonable-sounding argument; the key is that we want arguments with enough precision that we can map out the boundaries of their necessary conditions, and enough internal structure to adapt them to particular situations or new models without having to start over from scratch. In short, it’s about the ability to tell exactly when the argument applies, and to apply the argument in many ways and in many places.