It seems you use the phrase “Minimal Control Principle” in a less strict sense than I do.
I didn’t use the phrase; you did. ;-) I was replying to the rest of the sentence.
our model of a cognitive process should have the fewest possible “next step” representations.
I don’t get this. The idea of a “next step” is a bit more parsimonious than trying to describe the details of the feed-forward prediction hierarchy, even if it’s less technically accurate. But perhaps you have a different meaning for “next step” than I do. I just mean our brain’s prediction of “what happens next” in the world, which may be a self-referential and self-fulfilling prophecy.
These predictions are generated by a combination of external state information + internal state information (current goals). So for example, if one level of my current internal goal state is to walk across the room, and I’m currently sitting down, my prediction at one level of “what happens next” is the prediction that I’m going to stand up… which then cascades down to lower-level predictions triggering the motor actions to do so. Meanwhile, my internal state is still holding the subgoal of getting across the room.
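To make that concrete, here’s a toy sketch of the cascade in Python; the function and goal names are made up purely for illustration and don’t come from any real model:

```python
# Toy model of "goal stack + external state -> prediction of what happens
# next", where the prediction doubles as the trigger for action.

def predict_next(external_state, goal_stack):
    goal = goal_stack[-1]  # the innermost active subgoal
    if goal == "cross the room" and external_state == "sitting":
        # The higher-level prediction spawns a lower-level subgoal...
        goal_stack.append("stand up")
        return predict_next(external_state, goal_stack)
    if goal == "stand up" and external_state == "sitting":
        return "trigger stand-up motor sequence"
    return "no change predicted"

goal_stack = ["cross the room"]              # internal state: current goals
print(predict_next("sitting", goal_stack))   # -> trigger stand-up motor sequence
print(goal_stack)                            # "cross the room" still held below
```

Note how the original goal stays on the stack while the lower-level prediction runs; that’s the “meanwhile, my internal state is still holding the subgoal” part.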
We don’t necessarily have simultaneous awareness of all subgoal states, nor are most of these goals and subgoals consciously chosen. To initiate an action that runs counter to our active predictions, we have to change the entire subgoal stack, hence the time-and-patience part.
For example, if I’m not currently in the mood to work out, I can simply sit and visualize working out, and deflect all the objections that pop up in my head—objections that are essentially my brain’s prediction that, given my current state, I don’t want to exercise. I ignore them, and continue visualizing, until the objections are exhausted and all active subgoals are purged, at which point I suddenly notice that somehow or other, I’ve already wandered over to the treadmill and started it up while I was still busy visualizing.
(Note: YMMV if you try this; the actual technique has enough potential “gotcha” details that I’m teaching a live workshop today for my group so there can be practice and Q&A to handle them.)
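(For the curious, the bare loop structure of the trick can be caricatured in a few lines of Python. This is just the shape of it, with made-up objections standing in for whatever your brain actually throws at you; it is not the actual technique with its gotchas:)

```python
# Caricature of the visualize-until-objections-exhaust loop: objections
# surface one at a time and are noticed but not argued with; when the
# stack of "don't exercise" predictions is empty, the action just happens.

objections = ["too tired", "no time", "do it later"]  # cached predictions

def visualize_working_out(objections):
    while objections:
        objections.pop(0)  # deflect/ignore it; don't engage
    return "already on the treadmill"

print(visualize_working_out(objections))
```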
This means there is indeed little central control in cognition.
That depends on what you mean by “central” and “control”. ;-) If you mean conscious central control, then yeah, there’s pretty much none. The goal stack (heap?) isn’t really under conscious control; it just seems that way because consciousness is triggered by exceptional conditions in the control flow. It’s like consciousness is a quality-control consultant who thinks he’s in charge of running the business, because everyone comes to him with their problems. ;-)
That being said, consciousness seems to have a small goal stack of its own, the ability to direct attention as a whole, and the ability to suppress actions. These three abilities, combined, make it possible for us to influence the rest of the system.
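A rough sketch of that division of labor, with every name invented for illustration:

```python
# "Consciousness as QC consultant": the main loop runs on cached
# predictions and only escalates exceptional conditions, where
# consciousness can direct attention and improvise (or veto) an action.

cached_predictions = {"walking": "keep walking"}  # habitual control flow

def conscious_handler(situation):
    # The limited toolkit: a small goal stack, directing attention,
    # and the ability to suppress an action.
    print(f"(attention directed to: {situation})")
    return "improvised action"

for situation in ["walking", "novel obstacle"]:
    try:
        action = cached_predictions[situation]  # normal case: no awareness
    except KeyError:                            # exceptional condition
        action = conscious_handler(situation)   # now it *feels* in charge
    print(situation, "->", action)
```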
(This isn’t something I’ve focused my work on much in the past; my main focus has been changing the memories that drive the contents of the goal stack—i.e., what predictions will occur in what internal+external state combinations. While it’s still a conscious intervention, it amounts to changing the programming in advance, rather than making changes “while the program is running”. I’ve branched out into this area partly as a result of LW discussions, and partly because I’ve gotten to the point where the constraint on my productivity is no longer my old bad programming, but the lack of new good programming.)
The problem at this point seems to be that we don’t have a vocabulary connecting folk conceptions of productivity with cognitive science.
In this particular context, hypnosis and LoA have vocabulary and practice regarding monoidealism, going back about 100 years. Unfortunately, both fields also have a huge amount of utterly nonsensical theory floating around.
Until that is worked out, we can only have superficial discussions about tricks. For now, we can only do some experimental therapy on akrasia.
I don’t see the discussion as superficial, but then that’s because I’m viewing this stuff as extensions to my basic model of behavior as prediction-driven. (See the Wikipedia page on the memory-prediction framework for the cog sci part.)
My work has been focused on changing memories (via reconsolidation approaches) to change the predictions, and thus the behavior. The somatic marker hypothesis also plays a role, in verifying successful updates to cached predictions.
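As a toy model of that update-at-the-source idea (nothing here is real neuroscience or an actual protocol; it just shows the shape of the claim):

```python
# Behavior = cached (state -> prediction) lookup, so editing the stored
# entry changes the behavior at its source. The somatic-marker step is
# modeled as a crude before/after check that the update actually "took".

memory = {("tired", "see treadmill"): "predict: I don't want to exercise"}

def behave(state):
    return memory[state]  # behavior is just the cached prediction

def reconsolidate(state, new_prediction):
    old = behave(state)             # recall makes the memory editable...
    memory[state] = new_prediction  # ...so it can be re-stored, modified
    assert behave(state) != old, "update didn't take; try again"

reconsolidate(("tired", "see treadmill"), "predict: a short walk feels fine")
print(behave(("tired", "see treadmill")))  # the new prediction drives behavior
```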
I don’t talk about the details of these things much in my work, because it is not essential to know how they work in order to make successful behavior changes. To date I’ve focused primarily on disruption of dysfunctional predictions, rather than the creation/addition of useful ones, but that has started changing this year.
How we use this to fight akrasia is a different question.
Actually, “fighting” akrasia on an ongoing basis is a dumb idea, based on a confusion about what it is. It’s more efficient in the long haul to change your stored predictions than to spend all your time working around them.