Under this model, then, Type 2 processing is a particular way of chaining together the outputs of various Type 1 subagents using working memory. Some of the processes involved in this chaining are themselves implemented by particular kinds of subagents.
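To make the picture concrete, here is a toy sketch of what that chaining might look like if you squint at it as code. This is purely an illustration in the model's own terms; the class names, the buffer size, and the chaining loop are all invented for exposition, not claims about how the brain implements any of this.

```python
# Toy sketch only: Type 1 subagents as cheap pattern-matchers, and "Type 2
# processing" as a loop that routes their outputs through a small
# working-memory buffer. All names are invented for illustration.

from collections import deque


class Type1Subagent:
    """A fast, automatic process: takes a cue, returns a cached-feeling answer."""

    def __init__(self, name, respond):
        self.name = name
        self.respond = respond  # any cheap function, e.g. a lookup or heuristic

    def __call__(self, cue):
        return self.respond(cue)


class WorkingMemory:
    """A small buffer; old items fall out when capacity is exceeded."""

    def __init__(self, capacity=4):
        self.items = deque(maxlen=capacity)

    def hold(self, item):
        self.items.append(item)


def type2_chain(subagents, cue, memory):
    """'Type 2 processing' as chaining: each subagent sees the current contents
    of working memory (plus the original cue), and its output is held there so
    the next subagent can build on it."""
    memory.hold(cue)
    for agent in subagents:
        output = agent(list(memory.items))
        memory.hold(output)
    return list(memory.items)
```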
Something I have encountered in my own self-experiments and tinkering is Type 2 processes that chain together other Type 2 processes (and often some Type 1 subagents as well). These mesh well with persistent Type 2 subagents that get re-used because they are practical, and which sometimes end up resembling Type 1 subagents as their decision process becomes reflexive through repetition.
Have you encountered anything similar?
Probably, but this description is abstract enough that I have difficulty generating examples. Do you have a more concrete example?
The most notable example of a Type 2 process that chains other Type 2 processes as well as Type 1 processes is my “path to goal” generator, but as I sit here to analyze it, I am surprised to notice that much of what used to be Type 2 processing in its chain has been replaced with fairly solid Type 1 estimators, with triggers for when I leave their operating scope. What I thought started as Type 2s that call Type 2s now looks more like Type 2s that set triggers via Type 1s, which cause other Type 2s to get a turn on the processor later. It’s something of an indirect system, but the intentionality is there.
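Schematically, the indirection I’m gesturing at looks something like the following toy sketch (none of these names correspond to anything real; it’s just the shape of the mechanism, with a cheap Type 1-style check queuing slower Type 2 work for later):

```python
# Caricature of the trigger mechanism: a Type 2 process installs a cheap
# Type 1-style check, and when that check notices we've left an estimator's
# operating scope, it queues another Type 2 process to run later instead of
# invoking it directly. All names here are invented.

from collections import deque

pending_type2 = deque()  # Type 2 processes waiting for a turn on the processor


def install_trigger(in_scope, type2_followup):
    """Return a cheap check that enqueues a Type 2 follow-up the moment a
    situation falls outside the associated estimator's scope."""

    def trigger(situation):
        if not in_scope(situation):
            pending_type2.append(type2_followup)

    return trigger


def run_pending():
    """Later, when attention frees up, queued Type 2 processes get their turn."""
    while pending_type2:
        pending_type2.popleft()()
```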
My visibility into the intricacies of my pseudo-IFS is currently low, due to the energy cost of maintaining that visibility, and circumstances make regaining it infeasible for a while. As a result, I have some difficulty identifying specific Type 2 processes that aren’t highly implementation-specific, and I am vague on their intricacies. I apologize for not having more helpful details on that front.
I have something a bit clearer as an example of what started as Type 2 behavior and transitioned to Type 1 behavior. I noticed at one point that I was calculating gradients on a timescale that seemed automatic. Later investigation suggested that I had ended up with a Type 1 estimator that could handle a number of common data forms I might want gradients of (it seems to resemble Riemann sums), and I have something of a felt sense for whether the data I’m looking at will fall within the estimator’s scope.
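If I had to caricature that estimator in code, it would be something like the sketch below. This is a very loose analogy: the scope check and the difference quotients are just stand-ins for whatever the felt sense and the automatic calculation are actually doing.

```python
# Loose analogy only: given sampled values of some quantity, estimate its
# gradient with simple difference quotients, but only if a crude "scope"
# check says the data looks like something the estimator can handle.

def in_scope(xs, ys):
    """Crude applicability check: enough samples, with increasing and roughly
    evenly spaced x-values."""
    if len(xs) < 3 or len(xs) != len(ys):
        return False
    steps = [b - a for a, b in zip(xs, xs[1:])]
    return all(s > 0 for s in steps) and max(steps) < 2 * min(steps)


def rough_gradient(xs, ys):
    """Central difference-quotient estimate of dy/dx at each interior sample,
    or None when the data falls outside the estimator's scope (the case where
    a trigger would hand things back to slower Type 2 processing)."""
    if not in_scope(xs, ys):
        return None
    return [(ys[i + 1] - ys[i - 1]) / (xs[i + 1] - xs[i - 1])
            for i in range(1, len(xs) - 1)]
```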
At least Type 2 behavior turning into Type 1 behavior is a pretty common thing in skill learning; the classic example I’ve heard cited is driving a car, which at first is very effortful and requires a lot of conscious thought, but then gradually things get so automated that you might not even remember most of your drive home. But the same thing can happen with pretty much any skill; at first it’s difficult and requires Type 2 processing, until it’s familiar enough to become effortless.