Thanks for the detailed reply; I’d like to have the metadiscussion with you, but what exactly would you consider a better place to have it? I have a reply to you on “why mutual information = model” that isn’t yet complete, so I guess I could start another top-level post that addresses these issues.
Anyway:
This other program fits a control model to the human performance in that task, with only a few parameters. … Just three numbers (or however many it is, it’s something like that) closely fits an individual’s performance on the task, for as long as they perform it. Is that the sort of thing you are asking for?
Unfortunately, no. It’s not enough to show that humans play some game using a simple control algorithm that happens to work for it. You claimed that human behavior can be usefully described as tweaking output to control some observed variable. What you would need to show, then, is this model applied to behavior for which there are alternate, existing explanations.
For example, how does the controller model fit in with mate selection? When I seek a mate, what is the reference that I’m tracking? How does my sensory data get converted into a format that compares with the reference? What is the output?
I choose this example because it’s an immensely difficult task just to program object recognition. To say that my behavior is explained by trying to track some reference we don’t even know how to define, and by applying an operation to sense data we don’t understand yet, does not look like a simplification!
Or in the more general case: what is the default reference that I’m tracking? What am I tracking when I decide to go to work every day, and how do I know I’ve gotten to work?
Remember, to say you “want” something or that you “recognize” something hides an immense amount of complexity, which is why I don’t see how it helps to restate these problems as control problems.
Unfortunately, no. It’s not enough to show that humans play some game using a simple control algorithm that happens to work for it.
It doesn’t “just happen” to work. It works for the same reason that, say, a chemist’s description of a chemical reaction works: because the description describes what is actually happening.
Besides, according to the philosophy you expressed, all that matters is compressing the data. A few numbers that compress an arbitrarily large amount of data with high fidelity is pretty good, I would have thought. ETA: Compare how a single number, the local gravitational strength, suffices to predict the path of a thrown rock, given the right theory.
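To make that ETA concrete, here is the standard kinematics sketch (ordinary textbook physics, nothing PCT-specific): once you have the right theory, the one fitted number g predicts the whole trajectory.

```python
import math

g = 9.81  # m/s^2: the one locally fitted number

def height(v0, angle_deg, t):
    """Height (m) of a rock thrown at speed v0 (m/s) and angle angle_deg, t seconds later."""
    vy = v0 * math.sin(math.radians(angle_deg))
    return vy * t - 0.5 * g * t * t

def time_of_flight(v0, angle_deg):
    """Time (s) until the rock returns to its launch height."""
    return 2 * v0 * math.sin(math.radians(angle_deg)) / g
```

Every point on the arc, for any launch speed and angle, follows from these two lines plus g; that is the compression being claimed.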
Experiments based on PCT ideas routinely see correlations above 0.99. This is absolutely unheard of in psychology. Editors think results like that can’t possibly be true. But that is the sort of result you get when you are measuring real things. When you are doing real measurements, you don’t even bother to measure correlations, unless you have to talk in the language of people whose methods are so bad that they are always dealing with statistical fog.
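The tracking experiments being described fit models of roughly the following shape. This is only my own sketch with invented parameter values (a gain and a “slowing” factor), not the actual program referred to above: the simulated participant varies its output to keep the perceived cursor-target separation at a reference of zero.

```python
import math

def simulate_tracking(gain=8.0, slowing=0.2, steps=3000, dt=0.01):
    """Two-parameter control model of a cursor-tracking task.

    Returns the mean absolute tracking error over the second half
    of the run (after the transient has died away).
    """
    cursor, output = 0.0, 0.0
    errors = []
    for i in range(steps):
        target = math.sin(0.5 * i * dt)              # slowly moving target
        error = target - cursor                      # reference: zero separation
        output += slowing * (gain * error - output)  # sluggish (leaky) output
        cursor += output * dt                        # output moves the cursor
        errors.append(abs(error))
    return sum(errors[steps // 2:]) / (steps // 2)
```

The point is the parameter count: two numbers, plus the loop structure, reproduce an entire trace, which is what “a few numbers compressing an arbitrarily large amount of data” cashes out to.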
You claimed that human behavior can be usefully described as tweaking output to control some observed variable. What you would need to show, then, is this model applied to behavior for which there are alternate, existing explanations.
The alternate, existing explanations are worth no more than alchemical theories of four elements. It’s possible to go back and look at the alchemists’ accounts of their experiments, but there’s really not much point except historical interest. They were asking the wrong questions and making the wrong observations, using wrong theories. Even if you can work out what someone was doing, it isn’t going to cast light on chemistry, only on history.
For example, how does the controller model fit in with mate selection? When I seek a mate, what is the reference that I’m tracking? How does my sensory data get converted into a format that compares with the reference? What is the output?
You’re demanding that the new point of view instantly explain everything. But FWIW, when you seek a mate, the reference is, of course, having a mate. You perceive that you do not have one, and take such steps as you think appropriate to find one. If you want a detailed account right down to the level of nerve impulses of how that all happens—well, anyone who could do that would know how to build a strong AI. Nobody knows that, yet.
A theory isn’t a machine that will give you answers for free. ETA: Newtonian mechanics won’t hand you the answer to the N-body problem on a plate.
Or in the more general case: what is the default reference that I’m tracking? What am I tracking when I decide to go to work every day, and how do I know I’ve gotten to work?
You’re demanding that the new point of view instantly explain everything.
I’m demanding that it explain exactly what you claimed it could explain: behavior!
FWIW, when you seek a mate, the reference is, of course, having a mate. You perceive that you do not have one, and take such steps as you think appropriate to find one. If you want a detailed account right down to the level of nerve impulses of how that all happens—well, anyone who could do that would know how to build a strong AI. Nobody knows that, yet.
Yep, that confirms exactly what I was expecting: you’ve just relabeled the problem; you haven’t simplified it. Your model tells me nothing except “this is what you could do, once you did all the real work in understanding this phenomenon, which you got some other way”.
A theory isn’t a machine that will give you answers for free. ETA: Newtonian mechanics won’t hand you the answer to the N-body problem on a plate.
Poor comparison. Newtonian mechanics doesn’t give me an answer to the general N-body problem, but it gives me more than enough to generate a numerical solution to any specific N-body problem.
Your model isn’t even in the same league. It just says the equivalent of, “Um, the bodies move in a, you know, gravitational-like manner, they figure out where gravity wants them to go, and they bring that all, into effect.”
It feels like an explanation, but it isn’t. The scientific answer would look more like, “The net acceleration any body experiences is equal to the vector sum of the forces on the body obtained from the law of gravitation, divided by its mass. To plot the paths, start with the initial positions and velocities, find the accelerations, and then update the positions and start over.”
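And for what it’s worth, that recipe really is short to write down. This is a plain transcription of it (semi-implicit Euler, unit G, 2D, all values illustrative):

```python
def step(bodies, G=1.0, dt=0.001):
    """Advance one time step. bodies: list of [m, x, y, vx, vy] lists."""
    # Accelerations: vector sum of gravitational pulls. The mass of body i
    # cancels, since F_i = m_i * a_i.
    accels = []
    for i, (mi, xi, yi, _, _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (mj, xj, yj, _, _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * mj * dx / r3
            ay += G * mj * dy / r3
        accels.append((ax, ay))
    # Update velocities from accelerations, then positions from velocities,
    # and start over.
    for b, (ax, ay) in zip(bodies, accels):
        b[3] += ax * dt
        b[4] += ay * dt
        b[1] += b[3] * dt
        b[2] += b[4] * dt
```

Each call to `step` is one iteration of “find the accelerations, then update the positions”; looping it plots the paths.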
Just as an aside, I don’t think PCT is the ultimate solution to modeling humans in their entirety. I think Hawkins’ HTM model is actually a better description of object and pattern recognition in general, but there aren’t any significant conflicts between HTM and PCT, in that both propose very similar hierarchies of control units. The primary difference is that HTM emphasizes memory-based prediction rather than reference-matching, but I don’t see any reason why the same hierarchical units couldn’t do both. (PCT’s model includes localized per-controller memory much like HTM does, and suggests that memory is used to set reference values, in much the same way that HTM describes memory being used to “predict” that an action should be taken.)
The main modeling limitation that I see in PCT is that it doesn’t address certain classes of motivated behavior as well as Ainslie’s model of conditioned appetites does. But if you glue Ainslie, HTM, and PCT together, you get pretty decent overall coverage. And HTM/PCT is a very strong engineering model for “how it’s probably implemented”; i.e., HTM/PCT models are more or less the simplest things I can imagine building that do things the way humans appear to do them. Both models look way too simple to be “intelligence”, but that’s more a reflection of our inbuilt mind-projection tendencies than a flaw in the models!
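The shared “hierarchy of control units” idea can be sketched as a two-level cascade, where the upper unit’s output serves as the lower unit’s reference and only the bottom unit acts on the world. This is my own illustration with made-up gains, not code from either Powers or Hawkins:

```python
def cascade(target_pos=1.0, steps=2000, dt=0.005, k_pos=3.0, k_vel=30.0):
    """Two-level control hierarchy.

    The position unit's output is the velocity unit's reference;
    the velocity unit's output (a force) is what touches the world.
    Returns the final position.
    """
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        vel_ref = k_pos * (target_pos - pos)  # upper unit sets a goal velocity
        force = k_vel * (vel_ref - vel)       # lower unit pursues that goal
        vel += force * dt                     # world dynamics
        pos += vel * dt
    return pos
```

Neither unit knows anything about the other’s job; each just matches its own perception to its own reference, which is the architectural point both models make.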
On a more specific note, though:
Or in the more general case: what is the default reference that I’m tracking? What am I tracking when I decide to go to work every day, and how do I know I’ve gotten to work?
“You” don’t track; your brain’s control units do. And they do so in parallel—which is why you can be conflicted. Your reference for going to work every day might be part of the concept “being responsible” or “being on time”, or “not getting yelled at for lateness”, or whatever… possibly more than one.
In order to find out which reference(s) are relevant, you have to do what Powers refers to as “The Test”—that is, hypothesizing controlled variables and then disturbing them to see whether they end up being stabilized by your behavior.
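Here’s a toy version of that procedure, with made-up numbers: push on a variable and check whether “behavior” cancels the push.

```python
def run(disturbance, controlled=True, gain=20.0, steps=500, dt=0.01):
    """Apply a constant disturbance; return where the variable ends up."""
    value, output = 0.0, 0.0
    for _ in range(steps):
        if controlled:
            output = gain * (0.0 - value)  # behavior opposes deviation from reference 0
        value += (output + disturbance) * dt
    return value
```

If the variable ends up sitting near the reference despite the push, it passes The Test; if it drifts freely with the disturbance, it wasn’t being controlled.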
In practice, it’s easier with a human, since you can simply imagine NOT going to work, and notice what seems “bad” to you about that, at the somatic response (nonverbal or simple-verbal, System 1) level. Not at all coincidentally, a big chunk of my work has been teaching people to contradict their default responses and then ask “what’s bad about that?” in sequence to identify higher-level control variables for the behavior. (At the time I started doing that, I just didn’t have the terminology to explain what I was doing or why it was helpful.)
Of course, for some people, asking what’s bad about not going to work will produce confusion, because they’re controlling for something good about going to work… like having an exciting job, wanting to get stuff done, etc… but those folks are probably not seeing me about a motivational problem with going to work. ;-)
See pjeby’s reply. He gets it.