It’s likely that you also fight back harder, raising your determination (i.e., the reference level for completing the novel). That produces a greater sense of error, sooner, and active conflict between controllers (aka ego depletion), as the countering interest also goes into greater error.
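The basic conflict dynamic is easy to see in a toy simulation. This is my own minimal sketch, not a model from Powers: two integrating controllers with incompatible references for the same variable (say, weekly hours spent writing) end up in a stalemate, while their outputs — and their errors — keep growing.

```python
def simulate(steps=500, dt=0.1, gain=1.0):
    """Two controllers fight over one variable x.
    A wants x = 10 (write the novel); B wants x = 0 (do everything else).
    Each output integrates its own error; both act on the same x."""
    x = 0.0
    out_a = out_b = 0.0
    for _ in range(steps):
        err_a = 10.0 - x            # A's error: not enough writing
        err_b = 0.0 - x             # B's error: too much writing
        out_a += gain * err_a * dt  # each controller pushes harder
        out_b += gain * err_b * dt  # ...in its own direction
        x = out_a + out_b           # the world adds the pushes together
    # x gets stuck near the midpoint (5) while both outputs keep
    # growing and both controllers keep a chronic error of about 5.
    return x, err_a, err_b, out_a, out_b
```

Neither side gets what it wants: the shared variable settles at a compromise neither reference specifies, and both controllers escalate indefinitely — the chronic-error state described here.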
This sounds like a recipe for almost manic-depressive-like oscillations in behavior. What damps the cycle?
Maybe nothing. PCT suggests, however, that if your behavior leads to sustained chronic intrinsic error (e.g., you get stressed enough), your brain will begin to “reorganize” (learn) to change your behavior (or more precisely, to change the control structure generating the behavior) until the error goes away.
Unfortunately, because automatic reorganization is a fairly “dumb” optimization process (Powers proposes a simple pattern of “mutate and test until errors stop”), it is subject to some of the same biases as evolution. That is, the simplest solution will be chosen, not the most elegant one.
So, instead of elegantly negotiating suitable amounts of time for each goal, or finding a clever way to serve both at the same time, the simplest possible mutation that will fix the error in a case of conflict is for you to give up one of your goals.
Over time, this will then lead you to tend to give up more quickly, as your brain learns that changing your mind (“oh, it’s not really that important”) is a good way to stop the errors.
PCT proposes that intrinsic error (errors in perceptual signals whose definitions are hardwired by evolution) triggers a neural reorganization process, in which the brain tries out different reference levels and changes to inter-controller connections, for controllers involved in the error… that is, a mechanism for driving “trial and error” learning in the general case, but which can operate autonomously from conscious control. Powers proposes that this process can actually be quite random and still work, since the overall process has a selection step, driven by the overall levels of intrinsic error or conflict. IOW, learning is a control-driven optimization process using a meta-perception of errors in the primary control systems as its fitness function for optimizing.
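That selection loop can be sketched in a few lines. This is a toy illustration under my own assumptions — the error function, the choice of mutable parameters, and the accept-if-better rule are stand-ins, not Powers’ actual proposal:

```python
import random

def intrinsic_error(params):
    # Stand-in "fitness function": run a little control loop with
    # these parameters and return the mean squared error ("stress").
    gain, ref = params
    x, err_sum = 0.0, 0.0
    for _ in range(100):
        e = ref - x
        x += 0.1 * gain * e   # act on the world
        x += 0.05             # a steady disturbance
        err_sum += e * e
    return err_sum / 100

def reorganize(params, steps=2000, rate=0.1):
    """Mutate and test until errors stop: random tweaks to the
    control parameters, kept only when the meta-perception of
    error goes down (the selection step)."""
    best = intrinsic_error(params)
    for _ in range(steps):
        trial = [p + random.gauss(0, rate) for p in params]
        e = intrinsic_error(trial)
        if e < best:
            params, best = trial, e
    return params, best
```

Note that because the reference itself is mutable here, the cheapest fix the loop can find may simply be to lower the goal toward wherever the world already is — the “giving up” pattern described above.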
Of course, this process can also be directed consciously, by thinking things through—and Powers suggests that directing the learning process is in fact the original function of consciousness, since the raw control structures themselves aren’t much more than a huge network of glorified thermostats.
So the main addition I’ve made to my training methods since grasping PCT was to devise an algorithm for mapping out all the relevant control structure (using methods I already had for identifying subconscious predictive beliefs), in order to get the big picture of the control conflicts in place before attempting to make changes with my other existing methods.
I already knew that subconscious predictive beliefs (“if this, then that”) played a major role in behavior, but PCT helped me realize that the “if” and “then” clauses actually refer to the controlled perceptual variables that each belief links.
That is, memory (belief) is used to store value-change relationships between controlled variables, whether they’re as trivial as “if I fall, I’ll hurt myself” or as abstract and Bruce-ish as, “if I go after what I want, I’m selfish and unlovable”.
So, by examining beliefs, one finds the linked variables (e.g. “fall” and “hurt”). And by hypothesizing changes to each newly-discovered variable, one finds relevant new beliefs. This process can then be iterated to draw a multi-level map of the salient portions of the control structure affecting one’s behavior in a particular area.
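As a rough sketch, the iteration amounts to a breadth-first expansion over a belief store. The data structure and the example beliefs here are my own illustration of the idea, not the author’s actual procedure:

```python
from collections import deque

# Hypothetical belief store: each belief links an "if" variable
# to a "then" variable (example entries echo the text above).
BELIEFS = [
    ("fall", "hurt"),
    ("go after what I want", "selfish"),
    ("selfish", "unlovable"),
    ("finish novel", "go after what I want"),
]

def map_control_structure(seed):
    """Start from one controlled variable, find every belief that
    mentions it, add the variables those beliefs link, and repeat
    until no new variables turn up. Returns the discovered edges."""
    seen, edges = {seed}, []
    frontier = deque([seed])
    while frontier:
        var = frontier.popleft()
        for if_var, then_var in BELIEFS:
            if var in (if_var, then_var):
                edge = (if_var, then_var)
                if edge not in edges:
                    edges.append(edge)
                for linked in (if_var, then_var):
                    if linked not in seen:
                        seen.add(linked)
                        frontier.append(linked)
    return edges
```

Starting from “finish novel”, this walks out to “go after what I want”, then “selfish”, then “unlovable”, while leaving the unconnected “fall”/“hurt” belief out of the map — the multi-level, salience-limited picture described above.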
At this point, the work is still at a very early stage, but results so far seem promising. To be really sure of a substantial improvement, though, it’s going to take a few more weeks: in both my own case and in the case of clients trying the new method, the relevant behaviors have a cycle time of up to a month.