[excellent; odds ratio 3:2 for “worth checking LW2.0 sometimes” and 4:3 for “LW2.0 will succeed”]
I think “Determinism and Reconstructability” are great concepts but you picked terrible names for them, and I’ll probably call them “gears” and “meta-gears” or something short like that.
This article made me realize that my cognition runs on something equivalent to logical inductors, and what I recently wrote on Be Well Tuned about cognitive strategies is a reasonable attempt at explaining how to implement logical inductors in a human brain.
Thank you! I’m glad to contribute to those odds ratios.
I neglected to optimize those names, yeah. But “gears” vs. “meta-gears”? I think the two things together make up what people call “gears”, so it should be more like “gears inside” vs. “gears outside” (maybe “object-level gears” vs. “meta-level gears”), so that you can say both are necessary for good gears-level models.
I hadn’t seen Be Well Tuned!
I think it’s perfectly valid to informally say “gears” while meaning both “gears” (how clear a model is on what it predicts) and “meta-gears” (how clear the meta-model is on which models it a priori expects to be correct). And the new clarity you bring to this probably makes it the right time to redraw the boundaries around gears-ness, so that they better match the structure of reality. But this is just a suggestion.
Maybe so. I’m also tempted to call meta-gears “policy-level gears” to echo my earlier terminology post, but that seems a bit confusing. It would definitely be nice to have better terminology for all of this.