Yeah, I’m working towards that. Cheap examples would be: hard sciences have made certain kinds of assumptions (reductionism, the primacy of formal models, etc.) which have been extremely generative. There are lots of locally very helpful reifications; various kinds of optimization criteria, for instance, are very useful for generating strategies. A big part of my point is that these should basically always be taken as provisional.
Note that when you say “reification”, my mind replaces it with “model”, “map”, or “focus”. If you mean something else, the source of my confusion is clear.
“Focus” is the best of these, but it isn’t great. In most cases I think a lot of reifications are upstream of specific models, or generate them, or something like that. Like, reification is the (implicit) process of choosing weights for tradeoff calculations, but more generally for salience or priority, etc. Maybe an okay example: in economics we start trying to construct measures on the basis of which to determine things, and even before these get Goodharted we’ve already tried to collapse the complexity of the domain into a small number of factors to think about. We might even have destroyed a number of other important dimensions in that representation. This happens, in some sense, before the rest of the model is constructed.
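To make the “collapse” part concrete, here’s a toy sketch (my own invented factor names and weights, not any real index): the reification is in choosing which keys get weights at all; everything else silently drops out before any downstream model even sees it.

```python
# toy sketch, nothing real: collapse a many-dimensional situation into
# one weighted score. factor names and weights are invented.

def composite_index(measures: dict[str, float], weights: dict[str, float]) -> float:
    """Collapse several measured dimensions into a single number.

    Anything without a weight is silently dropped; the reification is
    in choosing the keys and weights, before any model sees the data.
    """
    return sum(w * measures.get(k, 0.0) for k, w in weights.items())

snapshot = {"gdp_growth": 2.1, "unemployment": 4.0,
            "gini": 0.41, "life_satisfaction": 6.8}
weights = {"gdp_growth": 0.7, "unemployment": -0.3}  # gini etc. just vanish

print(composite_index(snapshot, weights))  # -> roughly 0.27
```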
one source of my confusion may be the use of “reified” as a passive verb: something that happens to ideas without the actor being specified.
This may just be sloppiness on my part. I usually mean something like “an idea, as held by a person, which that person has reified.” Compare to e.g. “a loved one” or something like that.
this feels like a simplistic model of what’s going on with learning an instrument. iirc, in the “principles of SR” post from 20 years ago Wozniak makes the point that you essentially can’t start doing SR until you’ve already learned an item, and that’s clearly about purely “fact”-based learning. SR doesn’t apply in the way you’ve described to all the processes of tuning, efficiency, and accuracy gains that you need for learning an instrument. my sloppy model here is that formal practice, e.g. for music, is something like priming the system to spend optimization cycles on that skill; I assume cognitive scientists claim to have actual models here, which I suppose are >50% fake lol.
also, separately, professional musicians do in fact do a cheap version of SR for old repertoire: once a piece is established, they practice it only intermittently to keep it in memory.
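(if it helps, here’s the kind of toy scheduler I’m picturing for the “fact” case: made-up names and a simple doubling rule, not Wozniak’s actual algorithm. the gate on “learned” is his point, and the long intervals at the end are basically the cheap maintenance mode for old repertoire.)

```python
# rough toy of the "fact"-flavoured SR I mean (my code, not SuperMemo):
# reviews only get scheduled once the item counts as learned, and each
# successful review doubles the interval, which also captures the cheap
# "touch old repertoire occasionally" maintenance mode.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    learned: bool = False       # per Wozniak: no SR until initial learning is done
    interval_days: float = 1.0

def next_review(item: Item, recalled: bool, growth: float = 2.0) -> float:
    """Return the next interval in days; refuses items that aren't learned yet."""
    if not item.learned:
        raise ValueError(f"{item.name}: learn it first, then schedule reviews")
    item.interval_days = item.interval_days * growth if recalled else 1.0
    return item.interval_days

piece = Item("Bach prelude", learned=True)
for _ in range(5):
    print(next_review(piece, recalled=True))  # 2.0, 4.0, 8.0, 16.0, 32.0 days
```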