It’s not the language of “wanting” that seems to be the problem; after all, you yourself talk about “what I really want in the first place”. I think, rather, you’re suggesting Roko amend that quote from “the things we want to do” to “the things we think we want to do” or “the things we think we should want to do”.
Yes, but that wasn’t my main objection. My main objection is that “trying to prevent us from doing the things we want to do” implies an opponent whose goal is to frustrate you, rather than a blind controller simply trying to restore a variable to its programmed range.
It’s not “out to get you” in some fashion, and far too much self-help material creates that kind of paranoia already. Certainly, I don’t want anybody getting the impression that I promote such irrational paranoia myself.
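(To make the distinction concrete, here’s a minimal sketch in Python. It’s purely my own toy illustration, not anything from the PCT literature, and every name and number in it is made up.)

```python
# A bare "blind controller": a proportional negative-feedback loop that nudges a
# perceived variable back toward its reference value. There is no model of an
# opponent anywhere in it; it just acts on the current error, like a thermostat.
# (Toy illustration only; names and numbers are mine.)

def blind_controller(perceive, act, reference, gain=0.1, steps=100):
    for _ in range(steps):
        error = reference - perceive()  # how far the perception is from its set point
        act(gain * error)               # output is simply proportional to the error

# Example: restoring a number to 10, however it got knocked away from there.
state = {"x": 0.0}
blind_controller(
    perceive=lambda: state["x"],
    act=lambda delta: state.update(x=state["x"] + delta),
    reference=10.0,
)
print(round(state["x"], 2))  # -> 10.0, with no goal of "frustrating" anyone
```

The loop doesn’t know or care what knocked the variable off its set point; it just keeps acting until the error shrinks.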
Now it’s my turn to be puzzled about whether we’re disagreeing. Isn’t this quite compatible with what I wrote in the second paragraph?
In any case, IAWYC, but I haven’t yet seen evidence that would lead me to conclude all of my unconscious mental processes are best represented as control circuits of that sort; there could be some relatively sophisticated modeling there as well, just hidden from my conscious ratiocination.
Now it’s my turn to be puzzled about whether we’re disagreeing. Isn’t this quite compatible with what I wrote in the second paragraph?
I’m not disagreeing with what you said, I’m only disagreeing with what you said I said. Clearer now? ;-)
I haven’t yet seen evidence that would lead me to conclude all of my unconscious mental processes are best represented as control circuits of that sort; there could be some relatively sophisticated modeling there as well, just hidden from my conscious ratiocination.
It’s true that PCT (at least as described in the 1973 book) doesn’t take adequate account of predictive modeling. The model that I was working with (even before I found out about the “memory-prediction” framework) was that people’s feelings are readouts of predictions the brain makes, based on simple pattern recognition of relevant memories… aka, the “somatic marker hypothesis”.
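To caricature that in code (this is just my own sketch of the idea; the cues, the feelings, and the similarity measure are all invented for illustration):

```python
# Caricature of a "somatic marker" readout: the feeling attached to the most
# similar remembered situation is surfaced as the prediction for the current one.
# (All cues, feelings, and the similarity measure are invented for illustration.)

def felt_prediction(situation, memories):
    def similarity(a, b):
        a, b = set(a), set(b)
        return len(a & b) / max(len(a | b), 1)
    closest = max(memories, key=lambda m: similarity(situation, m["cues"]))
    return closest["feeling"]  # arrives as a feeling, not as a verbal inference

memories = [
    {"cues": {"exam", "hard", "effort"}, "feeling": "dread"},
    {"cues": {"game", "friends"},        "feeling": "ease"},
]
print(felt_prediction({"hard", "new project"}, memories))  # -> dread
```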
What I’ve realized since finding out about PCT is that these predictions can be viewed as memory-based linkages between controllers—they predict, “if this perception goes to this level, then that perception will go to that level”, e.g. “IF I have to work hard, THEN I’m not smart enough”.
I already had this sort of IF-THEN rule formulation in my model (described in the first draft of TTD), but what I was missing then is that in order for a predictive rule like this to be meaningful, the target of the “then” has to be some quantity under control—like “self-esteem” or “smartness” in the previous example.
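Here’s a toy rendering of that point, with made-up names and levels; it’s my own illustration, not anything formal from PCT:

```python
# Toy rendering of an IF-THEN rule as a link between controlled perceptions.
# The "THEN" side only matters because some controller is holding that perception
# near a reference, so firing the rule creates error in a controlled quantity.
# (Names, levels, and the dataclasses are my own illustration.)

from dataclasses import dataclass

@dataclass
class Controller:
    name: str
    perception: float  # current perceived level
    reference: float   # level the controller tries to maintain

    def error(self) -> float:
        return self.reference - self.perception

@dataclass
class PredictiveLink:
    source: Controller
    trigger_level: float
    target: Controller
    predicted_level: float

    def fires(self) -> bool:
        return self.source.perception >= self.trigger_level

# "IF I have to work hard, THEN I'm not smart enough":
effort = Controller("perceived effort", perception=0.9, reference=0.2)
smartness = Controller("perceived smartness", perception=0.8, reference=0.8)
rule = PredictiveLink(effort, trigger_level=0.7, target=smartness, predicted_level=0.3)

if rule.fires():
    smartness.perception = rule.predicted_level  # the prediction disturbs a controlled quantity
    print(smartness.name, "error:", smartness.error())  # nonzero error is what gets felt
```

Before the rule fires, “smartness” sits at its reference and generates no error; the prediction is what disturbs it, and that disturbance is what gets felt.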
In the past, I considered these sorts of simple predictive rules to be the primary drivers of human behavior (including rationalizations and other forms of verbal thinking), and they were the primary targets of my mindhacking work, because changing them changed people’s automatic responses and behavior, and quite often changed them permanently. (Presumably, in cases where we found a critical point or points in the controller network.)
This seemed like a sufficient model to me, pre-PCT, because it was easy to find these System 1 rules just underneath System 2’s thinking, whenever a belief or behavior pattern wasn’t working for someone.
Post-PCT, however, I realized that these rules are purely transitional—merely a subset of the edges of the control hierarchy graph. Where before I assumed that they were passive data, subject to arbitrary manipulation (i.e. mind-hacking), it’s become clear now that the system as a whole can add or drop these rules on the basis of their effects on the controllers.
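Roughly what I mean, continuing the same toy framing (again my own illustration, not PCT’s formalism): a rule-edge survives only if letting it act doesn’t make the hierarchy’s overall control worse.

```python
# Sketch of rule-edges being kept or dropped by their effect on the hierarchy:
# a rule survives only if letting it act does not increase total control error.
# (The data layout and the pruning criterion are my own toy framing.)

def total_error(controllers):
    return sum(abs(c["reference"] - c["perception"]) for c in controllers.values())

def prune_rules(controllers, rules):
    """Each rule is (source, trigger_level, target, predicted_level)."""
    kept = []
    for source, trigger, target, predicted in rules:
        if controllers[source]["perception"] < trigger:
            kept.append((source, trigger, target, predicted))  # never fired; no evidence yet
            continue
        before = total_error(controllers)
        old = controllers[target]["perception"]
        controllers[target]["perception"] = predicted          # let the edge act on its target
        if total_error(controllers) <= before:
            kept.append((source, trigger, target, predicted))  # the edge helps (or is neutral)
        else:
            controllers[target]["perception"] = old            # the edge hurts: drop it
    return kept

controllers = {
    "effort":    {"perception": 0.9, "reference": 0.2},
    "smartness": {"perception": 0.8, "reference": 0.8},
}
print(prune_rules(controllers, [("effort", 0.7, "smartness", 0.3)]))  # -> [] (rule dropped)
```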
Anyway, I’m probably getting into too much detail now, but the point is that I agree with you: merely having controllers is not enough to model human behavior; you also need the memory-predictive links and somatic markers (that were already in my model), and you need PCT’s idea of the “reorganization system”—something that might be compared to an AI’s ability to rewrite its source code, only much, much dumber. More like a simple genetic-programming optimizer, I would guess.
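If it helps, here’s roughly the level of “dumbness” I have in mind, sketched as blind parameter perturbation; the parameter names and the error function are made up for the example:

```python
# "Reorganization" read as a very dumb optimizer: blindly perturb one parameter
# at a time and keep the change only if intrinsic error goes down. A sketch of
# the analogy only; the parameter names and error function are made up.

import random

def reorganize(params, intrinsic_error, steps=1000, step_size=0.1, seed=0):
    rng = random.Random(seed)
    best, best_err = dict(params), intrinsic_error(params)
    for _ in range(steps):
        trial = dict(best)
        key = rng.choice(list(trial))
        trial[key] += rng.uniform(-step_size, step_size)  # blind random tweak
        err = intrinsic_error(trial)
        if err < best_err:                                # keep only what reduces error
            best, best_err = trial, err
    return best

# Example: two controller "gains" drift toward whatever minimizes intrinsic error.
target = {"gain_a": 0.8, "gain_b": 0.3}
intrinsic = lambda p: sum((p[k] - target[k]) ** 2 for k in target)
print(reorganize({"gain_a": 0.0, "gain_b": 0.0}, intrinsic))
```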