Just as an aside, I don’t think PCT is the ultimate solution to modeling humans in their entirety. I think Hawkins’ HTM model is actually a better description of object and pattern recognition in general, but there aren’t any significant conflicts between HTM and PCT, in that both propose very similar hierarchies of control units. The primary difference is that HTM emphasizes memory-based prediction rather than reference-matching, but I don’t see any reason why the same hierarchical units couldn’t do both. (PCT’s model includes localized per-controller memory much like HTM does, and suggests that memory is used to set reference values, in much the same way that HTM describes memory being used to “predict” that an action should be taken.)
The main modeling limitation that I see in PCT is that it doesn’t address certain classes of motivated behavior as well as Ainslie’s model of conditioned appetites does. But if you glue Ainslie, HTM, and PCT together, you get pretty decent overall coverage. And HTM/PCT together make a very strong engineering model of “how it’s probably implemented”, i.e. HTM/PCT models are more or less the simplest things I can imagine building that do things the way humans appear to do them. Both models look way too simple to be “intelligence”, but that’s more a reflection of our inbuilt mind-projection tendencies than a flaw in the models!
On a more specific note, though:
Or in the more general case: what is the default reference that I’m tracking? What am I tracking when I decide to go to work every day, and how do I know I’ve gotten to work?
“You” don’t track; your brain’s control units do. And they do so in parallel—which is why you can be conflicted. Your reference for going to work every day might be part of the concept “being responsible” or “being on time”, or “not getting yelled at for lateness”, or whatever… possibly more than one.
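To make the “parallel control units” picture concrete, here’s a minimal sketch of my own (not Powers’ actual simulation code, and the names and numbers are invented for illustration): each unit compares a perceived value against its reference and outputs in proportion to the error, and two units with conflicting references about the same perception partially cancel each other out.

```python
# Minimal PCT-style control unit (illustrative sketch, not Powers' code):
# each unit compares a perception to its reference and emits output
# proportional to the error.

class ControlUnit:
    def __init__(self, reference, gain=0.5):
        self.reference = reference  # value the unit tries to maintain
        self.gain = gain            # how strongly error drives output

    def step(self, perception):
        error = self.reference - perception
        return self.gain * error    # output pushing perception toward reference

# Two units controlling the same perception with opposite references
# model a conflict: their outputs partially cancel.
go_to_work = ControlUnit(reference=1.0)   # "be at work"
stay_home = ControlUnit(reference=0.0)    # "avoid work"

perception = 0.5
for _ in range(50):
    perception += go_to_work.step(perception) + stay_home.step(perception)

# With equal gains, the perception settles halfway between the two
# references, satisfying neither unit -- a toy picture of conflict.
print(round(perception, 2))
```

Neither unit “wins”; the system just sits at a compromise that both units register as persistent error, which is a reasonable cartoon of what feeling conflicted is like.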
In order to find out which reference(s) are relevant, you have to do what Powers refers to as “The Test”—that is, select a hypothesized controlled variable and then disturb it to see whether it ends up being stabilized by your behavior.
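The logic of The Test can be sketched in a few lines (my own toy model, with made-up parameters): apply a steady disturbance to the hypothesized variable and check whether behavior cancels it. If the variable is controlled, it stays near the reference despite the disturbance; if not, the disturbance just sticks.

```python
# Toy illustration of Powers' "Test" (an invented sketch, not his code):
# disturb a hypothesized controlled variable and see whether behavior
# stabilizes it against the disturbance.

def run_test(controlled, reference=10.0, gain=0.9, steps=100, disturbance=5.0):
    value = reference
    action = 0.0
    for _ in range(steps):
        value = action + disturbance  # disturbance pushes on the variable
        if controlled:
            # negative feedback: behavior adjusts to oppose the disturbance
            action += gain * (reference - value)
        # if not controlled, action never changes and the disturbance wins
    return value

print(run_test(controlled=True))   # ends up back near the reference
print(run_test(controlled=False))  # ends up at the disturbed value
```

The diagnostic signature is the same one Powers describes: a controlled variable resists disturbance, while an uncontrolled one simply follows it.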
In practice, it’s easier with a human, since you can simply imagine NOT going to work, and notice what seems “bad” to you about that, at the somatic response (nonverbal or simple-verbal, System 1) level. Not at all coincidentally, a big chunk of my work has been teaching people to contradict their default responses and then ask “what’s bad about that?” in sequence to identify higher-level control variables for the behavior. (At the time I started doing that, I just didn’t have the terminology to explain what I was doing or why it was helpful.)
Of course, for some people, asking what’s bad about not going to work will produce confusion, because they’re controlling for something good about going to work… like having an exciting job, wanting to get stuff done, etc… but those folks are probably not seeing me about a motivational problem with going to work. ;-)