Does anyone have any other ideas for trivial impetuses that could be helpful in fighting small-scale akrasia (or large-scale)?
In my experience, all large-scale akratic dilemmas can be reduced to small-scale problems. Large-scale akrasia of not writing your novel is small-scale akrasia of not writing a chapter this week. Large-scale akrasia of losing weight is small-scale akrasia of not bothering to eat well today. Akrasia is the reduction of its parts, not the sum.
Then you’re either very lucky, or you’ve misinterpreted your experience. People can be perfectly capable of writing one chapter this week, and then giving up on the whole thing the next week. The apparent reduction in this case is an illusion, because you can do a thing once and yet not be able to do it in general.
It seems like you should be able to simply repeat the experience the following week, but it doesn’t actually work that way in practice for most people who have problems with procrastination.
The thing that PCT added to my repertoire in this area was an explanation of why this phenomenon occurs. Specifically, perceptual variables are measured over differing time periods, with “higher” (i.e. controlling) levels being averaged over longer periods. So, for example, if you have a valued variable like “spending time with my kids” or “having time to myself” that’s perceptually averaged over a multiweek period, the first week you work on your novel probably won’t make much of a dent in that measurement.
By the second week, though, the measured error is going to start putting you into conflict and “reorganization”, during which you will suddenly realize that gosh, that novel isn’t really all that important and you could work on it tomorrow...
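Here’s a minimal sketch of that averaging effect in Python (the window length, reference level, and hours are numbers I’ve invented purely for illustration; PCT itself doesn’t specify them):

```python
# Toy model: a perceptual variable is a moving average of recent daily
# samples, and its error is (reference - perceived average).
def perceived(samples, window):
    recent = samples[-window:]
    return sum(recent) / len(recent)

reference = 2.0     # desired hours/day of family time (invented)
window = 21         # "family time" is perceived as a ~3-week average
samples = []

# Three weeks of normal life, then two weeks of novel-writing evenings.
for day in range(35):
    samples.append(2.0 if day < 21 else 0.5)
    error = reference - perceived(samples, window)
    if day % 7 == 6:  # report at the end of each week
        print(f"week {day // 7 + 1}: error = {error:+.2f}")
```

The first writing week (week 4) only moves the three-week average a little; by week 5 the error has doubled, which is when the conflict kicks in.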
In some respects, this model is even simpler than Ainslie’s appetites-and-hyperbolic-discounting model of competing “interests”. The interests are still there in PCT, but the activation of an interest is based on its degree of error—i.e., the error generates the “appetite” for more time for yourself or whatever. Thus, an interest can seem to build up strength over time, and displace another interest that was previously ascendant.
Once your behavior changes, the error falls off on that interest, but the perceived average of your original interest (writing the novel) begins to fall out of its desired range. Soon, you’re determined to write again… and the loop begins again.
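Extending the same toy model, here’s what that loop looks like if you assume (my simplification, not Powers’) that each week behavior simply serves whichever interest is currently in greater error:

```python
# Two "interests", each perceived as an average over the last `window`
# weeks, each with its own reference level.  Serving one interest
# starves the other, so the errors -- and the behavior -- oscillate.
def run(weeks=10, window=4):
    samples = {"novel": [], "family": []}
    reference = {"novel": 1.0, "family": 1.0}

    def errors():
        out = {}
        for name in samples:
            recent = samples[name][-window:] or [reference[name]]
            out[name] = reference[name] - sum(recent) / len(recent)
        return out

    for week in range(weeks):
        err = errors()
        chosen = max(err, key=err.get)  # the interest in greater error wins
        for name in samples:
            samples[name].append(1.0 if name == chosen else 0.0)
        print(f"week {week + 1}: {chosen:6}  "
              f"(novel {err['novel']:+.2f}, family {err['family']:+.2f})")

run()
```

Within a few weeks the output settles into alternation: each interest builds up error while neglected, wins, and is promptly displaced again.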
That, of course, is the mild version. It’s likely that you also fight back harder, by raising your determination (i.e., the reference level for completing the novel), leading to a greater sense of error, sooner, and active conflict between controllers (aka ego depletion), as the countering interest also goes into greater error.
In effect, the harder you try, the harder you fail, as the systems in conflict push back at you.
Whew. Can you tell I’ve had some experience with this sort of thing? ;-) Anyway, long story short: akratic reduction is an illusion, because akrasia can result from conflicts between perceptual variables measured over different time frames. Thus, you can be capable of doing something without conflict at one time scale, but unable to do it at another without inducing ego depletion.
This creates the all-too-common experience of discovering anti-akrasia hack #57, and having it work great for a little while, before it mysteriously stops working, or you simply stop using it. It’s not meta-akrasia; it’s just predictive akrasia. (I.e., if you know it’s going to work, and your “real” goal at the moment is to lose, then you will find a way not to do it.)
This sounds like a recipe for almost manic-depressive-like oscillations in behavior. What damps the cycle?
Maybe nothing. PCT suggests, however, that if your behavior leads to sustained chronic intrinsic error (e.g., you get stressed enough), your brain will begin to “reorganize” (learn), changing your behavior (or more precisely, the control structure generating the behavior) until the error goes away.
Unfortunately, because automatic reorganization is a fairly “dumb” optimization process (Powers proposes a simple pattern of “mutate and test until errors stop”), it is subject to some of the same biases as evolution. That is, the simplest solution will be chosen, not the most elegant one.
So, instead of elegantly negotiating suitable amounts of time for each goal, or finding a clever way to serve both at the same time, the simplest possible mutation that will fix the error in a case of conflict is for you to give up one of your goals.
Over time, this will then lead you to tend to give up more quickly, as your brain learns that changing your mind (“oh, it’s not really that important”) is a good way to stop the errors.
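To make the “dumb optimizer” point concrete, here’s a sketch of mutate-and-test reorganization in the simplest state space I can think of (the budget, the error function, and the mutation size are all invented; Powers’ actual proposal operates on a whole control hierarchy):

```python
import random

# The only state is two reference levels competing for a fixed budget of
# hours; intrinsic error is the unmet demand.  Mutate a reference at
# random, and keep the mutation only if total error shrinks -- "mutate
# and test until errors stop".
random.seed(0)
budget = 10.0
refs = {"novel": 8.0, "family": 8.0}  # hours/week each goal demands

def total_error(refs):
    return max(0.0, sum(refs.values()) - budget)  # conflict: over-demand

steps = 0
while total_error(refs) > 0 and steps < 10_000:
    steps += 1
    goal = random.choice(list(refs))
    trial = dict(refs)
    trial[goal] = max(0.0, trial[goal] + random.uniform(-2.0, 2.0))
    if total_error(trial) < total_error(refs):  # the selection step
        refs = trial

print(f"settled after {steps} trials: {refs}")
```

Notice that nothing in this state space can represent “negotiate” or “combine the goals”; the only mutations that ever get accepted are the ones that lower a reference, i.e., give up on part of a goal, which is exactly the bias described above.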
PCT proposes that intrinsic error (errors in perceptual signals whose definitions are hardwired by evolution) triggers a neural reorganization process, in which the brain tries out different reference levels and inter-controller connections for the controllers involved in the error… that is, a mechanism for driving “trial and error” learning in the general case, but one which can operate autonomously from conscious control. Powers proposes that this process can be quite random and still work, since the overall process has a selection step, driven by the overall level of intrinsic error or conflict. IOW, learning is a control-driven optimization process, using a meta-perception of errors in the primary control systems as its fitness function.
Of course, this process can also be directed consciously, by thinking things through—and Powers suggests that directing the learning process is in fact the original function of consciousness, since the raw control structures themselves aren’t much more than a huge network of glorified thermostats.
So the main addition I’ve made to my training methods since grasping PCT was to devise an algorithm for mapping out all the relevant control structure (using methods I already had for identifying subconscious predictive beliefs), in order to get the big picture of the control conflicts in place before attempting to make changes with the other methods I already had.
I already knew that subconscious predictive beliefs (“if this, then that”) played a major role in behavior, but PCT helped me realize that the “if” and “then” clauses actually refer to the controlled perceptual variables that each belief links.
That is, memory (belief) is used to store value-change relationships between controlled variables, whether they’re as trivial as “if I fall, I’ll hurt myself” or as abstract and Bruce-ish as, “if I go after what I want, I’m selfish and unlovable”.
So, by examining beliefs, one finds the linked variables (e.g. “fall” and “hurt”). And by hypothesizing changes to each newly-discovered variable, one finds relevant new beliefs. This process can then be iterated to draw a multi-level map of the salient portions of the control structure affecting one’s behavior in a particular area.
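Here’s a sketch of that iteration as code. The real work is in the elicitation step, which obviously can’t go in a function, so `elicit_beliefs` is just a placeholder lookup over an invented belief table:

```python
from collections import deque

# variable -> beliefs mentioning it, as (if_var, then_var, text) triples
BELIEFS = {
    "fall": [("fall", "hurt", "if I fall, I'll hurt myself")],
    "hurt": [("hurt", "avoid risk", "if I might get hurt, I avoid the risk")],
}

def elicit_beliefs(variable):
    """Placeholder for the introspective step: 'what do I believe
    happens if this variable changes?'"""
    return BELIEFS.get(variable, [])

def map_control_structure(seed_variables):
    """Breadth-first expansion: beliefs yield variables, variables
    yield further beliefs, until the map stops growing."""
    edges, seen = [], set()
    frontier = deque(seed_variables)
    while frontier:
        var = frontier.popleft()
        if var in seen:
            continue
        seen.add(var)
        for if_var, then_var, text in elicit_beliefs(var):
            edges.append((if_var, then_var, text))
            frontier.extend(v for v in (if_var, then_var) if v not in seen)
    return edges

for edge in map_control_structure(["fall"]):
    print(edge)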
At this point, the work is still at a very early stage, but results so far seem promising. To be really sure of a substantial improvement, though, it’s going to take a few more weeks: in both my own case and in the case of clients trying the new method, the relevant behaviors have a cycle time of up to a month.
PCT?
Perceptual Control Theory, an approach to the study of living organisms developed by William Powers.
I introduced the subject to LW here. Pjeby has enthusiastically taken to it.
Other links here, here, here.
See also: http://wiki.lesswrong.com/wiki/Control_theory