Computer scientist
Fairly deep experience with self-programming and modification of intuition/reflexes
Personal jargon/nomenclature was developed in isolation and seldom matches what other people use
SilverFlame
[Question] Looking for a post I read if anyone recognizes it
The most notable example of a Type 2 process that chains other Type 2 processes (as well as Type 1 processes) is my “path to goal” generator, but as I sit down to analyze it, I am surprised to notice that much of what used to be Type 2 processing in its chain has been replaced with fairly solid Type 1 estimators, each with triggers for when I leave its operating scope. What I thought started as Type 2s that call Type 2s now looks more like Type 2s that set triggers via Type 1s so that other Type 2s get a turn on the processor later. It’s something of an indirect system, but the intentionality is there.
My visibility into the intricacies of my pseudo-IFS is currently low due to the energy cost of maintaining that visibility, and circumstances won’t make regaining it feasible for a while. As a result, I’m having some difficulty identifying specific Type 2 processes in a way that isn’t either super implementation-specific or vague on the intricacies. I apologize for not having more helpful details on that front.
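If it helps to picture the structure, here is a loose analogy in Python: a deliberate process registers a cheap trigger rather than calling the next deliberate process directly, and the trigger later queues that process for a turn. Everything here (the names, the observation dict, the queue) is made up purely for illustration; it is an analogy for the pattern, not a model of the actual processes.

```python
from collections import deque

# Cheap "Type 1-like" triggers, each paired with the deliberate process it queues.
triggers = []
attention_queue = deque()

def register_trigger(condition, process):
    """A deliberate process sets a cheap condition that will later queue
    another deliberate process, instead of calling it directly."""
    triggers.append((condition, process))

def fast_pass(observation):
    """Cheap, automatic pass: evaluate every trigger and queue any matches."""
    for condition, process in triggers:
        if condition(observation):
            attention_queue.append(process)

def slow_pass(observation):
    """Expensive pass: queued processes get their turn 'on the processor'."""
    while attention_queue:
        attention_queue.popleft()(observation)

# e.g. only re-run the slow path-to-goal planner once a fast estimator
# notices we've left its operating scope.
register_trigger(lambda obs: obs.get("outside_scope", False),
                 lambda obs: print("re-running the slow path-to-goal planner"))
observation = {"outside_scope": True}
fast_pass(observation)
slow_pass(observation)
```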
I have something a bit clearer as an example of what started as Type 2 behavior and transitioned to Type 1 behavior. I noticed at one point that I was calculating gradients in a timeframe that seemed automatic. Later investigation seemed to suggest that I had ended up with a Type 1 estimator that could handle a number of common data forms that I might want gradients of (it seems to resemble Riemann sums), and I have something of a felt sense for whether the form of data I’m looking at will mesh well with the estimator’s scope.
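For a sense of the kind of thing such an estimator might approximate, here is a generic numerical sketch using central differences over sampled points (the differentiation-side cousin of a Riemann sum). This is a stand-in illustration only, not a reconstruction of the actual estimator.

```python
# Generic sketch: estimate slopes from sampled data via central differences.
def estimate_gradient(xs, ys):
    """Estimate dy/dx at each interior sample point."""
    grads = []
    for i in range(1, len(xs) - 1):
        grads.append((ys[i + 1] - ys[i - 1]) / (xs[i + 1] - xs[i - 1]))
    return grads

# e.g. samples of y = x**2 give slopes of 2x at the interior points
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]
print(estimate_gradient(xs, ys))  # [1.0, 2.0, 3.0]
```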
I have a modest amount of pair programming/swarming experience, and there are some lessons I have learned from studying those techniques that seem relevant here:
General cooperation models typically opt for vagueness instead of specificity to broaden the audiences that can make use of them
Complicated/technical problems such as engineering, programming, and rationality tend to require a higher level of quality and efficiency in cooperation than more common problems
Complicated/technical problems also magnify the overhead costs of trying to harmonize thought and communication patterns amongst the team(s), due to a reduced tolerance for failure
With these in mind, I would posit that a factor worth considering is that the traditional models of collaboration simply don’t meet the quality and cost requirements in their unmodified form. It is quite easy to picture a rationalist deciding that forging new collaboration models isn’t worth the opportunity cost, especially if they aren’t actively on the front lines of some issue they consider Worth It.
Under this model, then, Type 2 processing is a particular way of chaining together the outputs of various Type 1 subagents using working memory. Some of the processes involved in this chaining are themselves implemented by particular kinds of subagents.
Something I have encountered in my own self-experiments and tinkering is Type 2 processes that chain together other Type 2 processes (and often some Type 1 subagents as well). This meshes well with persistent Type 2 subagents that get re-used due to their practicality, and that sometimes end up resembling Type 1 subagents as their decision process becomes reflexive through repetition.
Have you encountered anything similar?
Programming an IFS for alternate uses
I assign weights to terminal and instrumental value differently, with instrumental value growing higher for steps that are less removed from producing terminal value and/or for steps whose results won’t easily backslide or revert without maintenance.
As far as uncertainty goes, my general formula is to keep plans composed of “sure bet” steps if the risk of failure is high, but to allow less sure-fire steps to be attempted if there is more wiggle room in play. This sometimes results in plans that are overly circuitous but resistant to common points of failure. The success rate of a step is calculated from my relevant experience and practice levels, as well as awareness of any relevant environmental factors. The actual weights were developed through iteration, and are likely specific to my framework.
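To make the shape of that trade-off concrete, here is a toy sketch in Python. The weights, exponents, and field names are illustrative placeholders, not the actual values I use.

```python
# Toy sketch: prefer "sure bet" steps when there is little slack, allow
# riskier steps when there is wiggle room. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    success_rate: float        # estimated from experience/practice/environment
    distance_to_terminal: int  # 0 = directly produces terminal value
    needs_maintenance: bool    # True if the result backslides without upkeep

def step_value(step, wiggle_room):
    """Higher is better. wiggle_room in [0, 1]: 0 means failure is costly."""
    instrumental = 1.0 / (1 + step.distance_to_terminal)
    stability = 0.7 if step.needs_maintenance else 1.0
    # With little wiggle room, punish uncertain steps sharply.
    risk_penalty = step.success_rate ** (3 - 2 * wiggle_room)
    return instrumental * stability * risk_penalty

steps = [
    Step("circuitous but reliable", 0.95, 2, False),
    Step("direct but chancy", 0.60, 0, False),
]
for slack in (0.1, 0.9):
    best = max(steps, key=lambda s: step_value(s, slack))
    print(f"wiggle_room={slack}: pick '{best.name}'")
```

With these placeholder numbers, the low-slack case picks the circuitous step and the high-slack case picks the direct one, which is the pattern described above.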
Here’s a real example of a decision calculation, as requested:
Scenario: I’m driving home from work, and need to pick which restaurant to get dinner from.
Value Categories (a sampling):
Existing Desires: Is there anything I’m already in the mood for, or conversely something I’m not in the mood for?
Diminishing Returns: Have I chosen one or more of the options too recently, or has it been a while since I chose one of the options?
Travel Distance: Is it a short or long diversion from my route home to reach the restaurant(s)?
Price Tag: How pricey or cheap are the food options?
I don’t enjoy driving much, so Travel Distance is usually the highest-ranked Value Category, thoroughly eliminating food options that are too much of a deviation from my route. Next is Existing Desires, then Diminishing Returns, which together let me pursue my desires without getting overexposed to any one option. My finances are generally in a state where Price Tag doesn’t make much difference in location selection, but it plays a more noticeable role when it comes time to figure out my order.
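As a rough sketch of how that ranking plays out in code: a hard cut-off on Travel Distance, then the remaining categories applied in priority order. The restaurants, numbers, and threshold are placeholders for illustration, not my actual figures.

```python
# Placeholder sketch of the category-ranked restaurant decision.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    travel_minutes: float    # extra diversion from the route home
    desire: int              # Existing Desires: -1 averse, 0 neutral, 1 craving
    days_since_last: int     # Diminishing Returns
    price: float             # Price Tag

MAX_DIVERSION = 10.0  # hard cut-off: Travel Distance dominates

def rank(options):
    # Eliminate anything too far off the route, then sort by the remaining
    # categories in priority order (desire first, then recency, then price).
    viable = [o for o in options if o.travel_minutes <= MAX_DIVERSION]
    return sorted(viable,
                  key=lambda o: (-o.desire, -o.days_since_last, o.price))

options = [
    Option("Taqueria", 4.0, 1, 6, 12.0),
    Option("Noodle bar", 8.0, 1, 2, 14.0),
    Option("Steakhouse", 25.0, 1, 30, 40.0),  # eliminated: too far off the route
]
print([o.name for o in rank(options)])  # ['Taqueria', 'Noodle bar']
```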
That’s the one, thank you!