So are you saying that any program can be modeled by PCT better than by looking at the program itself, or that although this particular robot isn’t PCT, a hypothetical robot that was more reflective of real human behavior would be?
I am saying that this particular robot (without the add-on human module) is a control system. It consists of nothing more than that single control system. It contains no representation of any part of itself. It does not reflect on its nature, or try to find other ways of achieving its goal.
The hierarchical arrangement of control systems that HPCT (Hierarchical PCT) ascribes to humans and other living organisms is more complex. Humans have goals that are instrumental towards other goals, and which are discarded as soon as they become ineffective for those higher-level goals.
As for goals, if I understand your definition correctly, even a behaviorist system could be said to have goals (if you reinforce it every time it pulls the lever, then its new goal will be to pull a lever). If that’s your definition, I agree that this robot has goals, and I would rephrase my thesis as being that those goals are not context-independent and reflective.
Behaviourism is a whole other can of worms. It models living organisms as stimulus-response systems, in which outputs are determined by perceptions. PCT is the opposite: perceptions are determined by outputs.
I agree with you that behaviorism and PCT are different, which is why I don’t understand why you’re interpreting the robot as PCT and not behaviorist. From the program, it looks pretty clearly like (STIMULUS: see blue → RESPONSE: fire laser) to me.
Do you have GChat or any kind of instant messenger? I feel like real-time discussion might be helpful here, because I’m still not getting it.
I agree with you that behaviorism and PCT are different, which is why I don’t understand why you’re interpreting the robot as PCT and not behaviorist. From the program, it looks pretty clearly like (STIMULUS: see blue → RESPONSE: fire laser) to me.
Well, your robot example was an intuition pump constructed so as to be as close as possible to stimulus-response nature. If you consider something only slightly more complicated the distinction may become clearer: a room thermostat. Physically ripped out of its context, you can see it as a stimulus-response device. Temperature at sensor goes above threshold --> close a switch, temperature falls below threshold --> open the switch. You can set the temperature of the sensor to anything you like, and observe the resulting behaviour of the switch. Pure S-R.
In context, though, the thermostat has the effect of keeping the room temperature constant. You can no longer set the temperature of the sensor to anything you like. Put a candle near it, and the temperature of the rest of the room will fall while the sensor remains at a constant temperature. Use a strong enough heat source or cold source, and you will be able to overwhelm the control system’s efforts to maintain a constant temperature, but this fails to tell you anything about how the control system works normally. Do the analogous thing to a living organism and you either kill it or put it under such stress that whatever you observe is unlikely to tell you much about its normal operation—and biology and psychology should be about how organisms work, not how they fail under torture.
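A toy simulation makes the candle point concrete (a sketch of my own, not part of the original comment; every constant is invented). A heater switches on when the sensor reads below the set point, and an optional “candle” warms only the sensor:

```python
# Toy thermostat-and-room model.  All constants are invented for illustration.

def simulate(candle_on_at=None, steps=600, dt=1.0):
    set_point = 20.0          # thermostat threshold (deg C)
    room = 15.0               # room air temperature
    outside = 5.0             # outdoor temperature
    sensor_offset = 0.0       # extra warmth at the sensor (e.g. a nearby candle)
    for t in range(steps):
        if candle_on_at is not None and t >= candle_on_at:
            sensor_offset = 3.0              # the candle warms the sensor only
        sensor = room + sensor_offset        # what the thermostat actually senses
        heater_on = sensor < set_point       # simple on/off switch
        heat_in = 0.3 if heater_on else 0.0
        heat_out = 0.01 * (room - outside)   # heat loss to the outside
        room += (heat_in - heat_out) * dt
    return room

print("final room temp, no candle:          ", round(simulate(), 1))
print("final room temp, candle near sensor: ", round(simulate(candle_on_at=300), 1))
```

Without the candle the room settles near the set point; with the candle near the sensor, the sensor is held near the set point while the rest of the room ends up a few degrees cooler.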
Did you know that lab rats are normally starved until they have lost 20% of their free-feeding weight, before being used in behavioural experiments?
Here’s a general block diagram of a control system. The controller is the part above the dotted line and its environment the part below (what would be called the plant in an industrial context). R = reference, P = perception, O = output, D = disturbance (everything in the environment besides O that affects the perception). I have deliberately drawn this to look symmetrical, but the contents of those two boxes make its functioning asymmetrical. P remains close to R, but O and D need have no visible relationship at all.
              R |
                |
                V
            +-------+
            |       |
       +--->|       |----+
       |    |       |    |
       ^    +-------+    v
       |                 |
.... P | ............... | O ....
       |                 |
       ^    +-------+    v
       |    |       |    |
       +----|       |<---+
            |       |
            +-------+
                ^
                |
              D |
When you are dealing with a living organism, R is somewhere inside it. You probably cannot measure it even if you know it exists. (E.g. just what and where, physically, is the set point for deep body temperature in a mammal? Not an easy question to answer.) You may or may not know what P is—what the organism is actually sensing. It is important to realise that when you perform an experiment on an animal, you have no way of setting P. All you can do is create a disturbance D that may influence P. D, from a behavioural point of view, is the “stimulus” and O, the creature’s action on its environment, is the “response”. The behaviourist description of the situation is this:
       +-------+
  D    |       |    O
 ----->|       |----->
       |       |
       +-------+
This is simply wrong. The system does not work like that and cannot be understood like that. It may look as if D causes O, but that is like thinking that a candle put in a certain place chills the room, a fact that will seem mysterious and paradoxical when you do not know that the thermostat is present, and will only be explained by discovering the actual mechanism, discarding the second diagram in favour of the first. No amount of data collection will help until one has made that change. This is why correlations are so lamentably low in psychological experiments.
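A minimal loop simulation (my own sketch with invented numbers, assuming the simplest possible controller and environment) shows how the apparent D-causes-O relationship arises. The controller only ever acts to keep P near R, yet if you record nothing but D and O you see the output faithfully mirroring the disturbance, in the opposite direction, like the candle that “chills” the room:

```python
import random

# Toy control loop.  Environment: P = O + D.  Controller: integrate R - P.
# Numbers are invented; this only illustrates the first diagram above.

random.seed(1)
R = 10.0          # reference, hidden inside the controller
O = R             # output ("response"); start at the operating point
D = 0.0           # disturbance ("stimulus")
Ds, Os, Ps = [], [], []
for step in range(5000):
    D += random.gauss(0.0, 0.1)   # slowly drifting disturbance
    P = O + D                     # perception produced by the environment
    O += 0.5 * (R - P)            # controller acts to reduce the error
    Ds.append(D)
    Os.append(O)
    Ps.append(P)

spread = lambda xs: round(max(xs) - min(xs), 2)
print("range of D:", spread(Ds))   # large: the disturbance wanders freely
print("range of O:", spread(Os))   # large: the output ends up tracking -D
print("range of P:", spread(Ps))   # small: the perception stays pinned near R
```

Knowing only D and O, it is tempting to fit a “response function” from stimulus to response; but in this toy loop the fitted relation is just O ≈ R − D, i.e. it reflects the environment equation and the hidden reference, not anything about how the controller itself works.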
Do you have GChat or any kind of instant messenger?
No, I’ve never used any of those systems. I prefer a medium in which I can take my time to work out exactly what I want to say.
Okay, we agree that the simple robot described here is behaviorist and the thermostat is PCT. And I certainly see where you’re coming from with the rats being PCT because hunger only works as a motivator if you’re hungry. But I do have a few questions:
There are some things behaviorism can explain pretty well that I don’t know how to model in PCT. For example, consider heroin addiction. An animal can go its whole life not wanting heroin until it’s exposed to some. Then suddenly heroin becomes extraordinarily motivating and it will preferentially choose shots of heroin over food, water, or almost anything else. What is the PCT explanation of that?
I’m not entirely sure which correlation studies you’re talking about here; most psych studies I read are done in an RCT-type design and so use p-values rather than r-values; they can easily end up with p < .001 if they get a large sample and a good hypothesis. Some social psych studies work off of correlations (eg the correlation between observer-rated attractiveness and observer-rated competence at a skill); correlations are “lamentably low” in social psychology because high-level processes (like opinion formation, social interaction, etc.) have a lot of noise. Are there any PCT studies of these sorts of processes (not simple motor coordination problems) that have any higher correlation than standard models do? Any with even the same level of correlation?
What’s the difference between control theory and stimulus-response in a context? For example, if we use a simplified version of hunger in which the hormone ghrelin is produced in response to hunger and the hormone leptin is produced in response to satiety, we can explain this in two ways: the body is trying to PCT itself to the perfect balance of ghrelin and leptin, or in the context of the stimulus ghrelin the response of eating is rewarded and in the context of the stimulus leptin the response of eating is punished. Are these the same theory, or are there experiments that would distinguish between them? Do you know of any?
Does PCT still need reinforcement learning to explain why animals use some strategies and not others to achieve equilibrium? For example, when a rat in a Skinner box is hungry (ie its satiety variable has deviated in the direction of hunger), and then it presses a lever and gets a food pellet and its satiety variable goes back to its reference range, would PCTists explain that as getting rewarded for pressing the lever and expect it to press the lever again next time it’s hungry?
An animal can go its whole life not wanting heroin until it’s exposed to some. Then suddenly heroin becomes extraordinarily motivating and it will preferentially choose shots of heroin over food, water, or almost anything else
Rats don’t always choose drugs over everything else
Summary: An experimenter thought drug addiction in rats might be linked to being kept in distressing conditions, made a Rat Park to test the idea, and found that the rats in the enriched Rat Park environment ignored the morphine on offer.
EDIT: apparently the study had methodological issues and hasn’t been replicated, making the results somewhat suspect, as pointed out by Yvain below
I hate to admit I get science knowledge from Reddit, but the past few times this was posted there it was ripped apart by (people who claimed to be) professionals in the field—riddled with methodological errors, inconsistently replicated, et cetera. The fact that even its proponents admit the study was rejected by most journals doesn’t speak well of it.
I think it’s very plausible that one’s situation contributes to addiction; we know that people in terrible situations have higher discount rates than others and so tend toward short-term thinking that promotes that kind of behavior, and certainly they have fewer reasons to try to live life as a non-addict. But I think the idea that morphine is no longer interesting and you can’t become addicted when you live a stimulating life is wishful thinking.
Damn. Oh well, noted and edited into the original comment.
Well, like I said, all I have to go on is stuff people said on Reddit and one failed replication study I was able to find somewhere by a grad student of the guy who did the original research. The original research is certainly interesting and relevant and does speak to the problems with a very reductionist model.
This actually gets to the same problem I’m having looking up stuff on perceptual control theory, which is that I expect a controversial theory to be something where there are lots of passionate arguments on both sides, but on both PCT and Rat Park, when I’ve tried to look them up I get a bunch of passionate people arguing that they’re great, and then a few scoffs from more mainstream people saying “That stuff? Nah.” without explaining themselves. I don’t know whether it’s because of Evil Set-In-Their-Ways Mainstream refusing to acknowledge the new ideas, or whether they’re just so completely missing the point that people think it’s not worth their while to respond. It’s a serious problem and I wish that “skeptics” would start addressing this kind of thing instead of debunking ghosts for the ten zillionth time.
Just a brief note to say that I do intend to get back to this, but I’ve been largely offline since the end of last week, and will be very busy at least until the end of this month on things other than LessWrong. I would like to say a lot more about PCT here than I have in the past (here, here, and in various comments), but these things take me long periods of concentrated effort to write.
BTW, one of the things I’m busy with is PCT itself, and I’ll be in Boulder, Colorado for a PCT-related meeting 28-31 July, and staying on there for a few days. Anyone around there then?
For example, when a rat in a Skinner box is hungry (ie its satiety variable has deviated in the direction of hunger), and then it presses a lever and gets a food pellet and its satiety variable goes back to its reference range, would PCTists explain that as getting rewarded for pressing the lever and expect it to press the lever again next time it’s hungry?
The PCT learning model doesn’t require reinforcement at the control level, as its model of memory is a mapping from reference levels to predicted levels of other variables. I.e., when the rat notices that the lever-pressing is paired with food, a link is made between two perceptual variables: the position of the lever, and the availability of food. This means that the rat can learn that food is available, even when it’s not hungry.
Where reinforcement is relevant to PCT is in the strength of the linkage and in the likelihood of its being recorded. If the rat is hungry, then the linkage is more salient, and more likely to be learned.
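A very loose sketch of that idea, under my own reading of it (this is not PCT’s actual formalism, and the salience factor is invented): the lever-to-food link gets laid down whether or not the rat is hungry; hunger only changes how fast.

```python
# Hypothetical salience-weighted association, for illustration only.

def learn_link(pairings, hunger, base_rate=0.05):
    """Strength of a lever->food link after some number of paired observations.

    `hunger` scales salience: a hungry rat lays the link down faster,
    but a sated rat still learns it, just more slowly.
    """
    strength = 0.0
    salience = 1.0 + 4.0 * hunger          # invented modulation factor
    for _ in range(pairings):
        strength += base_rate * salience * (1.0 - strength)
    return strength

print("hungry rat, 20 pairings:", round(learn_link(20, hunger=1.0), 2))
print("sated rat, 20 pairings: ", round(learn_link(20, hunger=0.0), 2))
```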
Notice though, that again the animal’s internal state is of primary importance, not the stimulus/response. In a sense, you could say that you can teach an animal that a stimulus and response are paired, but this isn’t the same as making the animal behave. If we starved you and made you press a lever for your food, you might do it, or you might tell us to fork off. Yet, we don’t claim that you haven’t learned that pressing the lever leads to food in that case.
(As Richard says, it’s well established that you can torture living creatures until they accede to your demands, but it won’t necessarily tell you much about how the creature normally works.)
In any case, PCT allows for the possibility of learning without “reinforcement” in the behaviorist sense, unless you torture the definition of reinforcement to the point that anything is reinforcement.
Regarding the leptin/ghrelin question, my understanding is that PCT as a psych-physical model primarily addresses those perceptual variables that are modeled by neural analog—i.e., an analog level maintained in a neural delay loop. While Powers makes many references to other sorts of negative feedback loops in various organisms from cats to E. coli, the main thrust of his initial book deals with building up a model of what’s going on, feedback-loopwise, in the nervous system and brain, not the body’s endocrine systems.
To put it another way, PCT doesn’t say that control systems are universal, only that they are ubiquitous, and that the bulk of organisms’ neural systems are assembled from a relatively small number of distinct component types that closely resemble the sort of components that humans use when building machinery.
IOW, we should not expect that PCT’s model of neural control systems would be directly applicable to a hormone level issue. However, we can reason from general principles and say that one distinguishing feature of a PCT model of the leptin/ghrelin question is that PCT includes an explicit model of hierarchy and conflict in control networks, so that we can answer questions about what happens if both leptin and ghrelin are present (for example).
If those signals are at the same level of control hierarchy, we can expect conflict to result in oscillation, where the system alternates between trying to satisfy one or the other. Or, if they’re at different levels of hierarchy, then we can expect one to override the other.
But, unlike a behavioral model where the question of precedence between different stimuli and contexts is open to interpretation, PCT makes some testable predictions about what actually constitutes hierarchy, both in terms of expected behavior, and in terms of the physical structure of the underlying control circuitry.
That is, if you could dissect an organism and find the neurons, PCT predicts a certain type of wiring to exist, i.e., that a dominant controller will have wiring to set the reference levels for lower-level controllers, but not vice-versa.
Second, PCT predicts that a dominant perception must be measured at a longer time scale than a dominated one. That is, the lower-level perception must have a higher sampling rate than the higher-level perception. Thus, for example, as a rat becomes hungrier (a longer-term perceptual variable), its likelihood of pressing a lever to receive food in spite of a shock is increased.
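A minimal sketch of that wiring (my own illustration; the variable names, gains, and time constants are all invented, not Powers’ canonical model): a slow higher-level loop controlling “satiety” whose output is the reference signal of a fast lower-level loop controlling “effort”. The only coupling between the levels is that reference signal, and the higher loop samples on a longer time scale than the lower one, matching the two structural predictions above.

```python
# Hypothetical two-level control hierarchy (all names and numbers invented).
# Higher level: keep "satiety" near its reference; runs slowly; its output
# becomes the reference of the lower level.
# Lower level: keep "effort" (e.g. rate of working the lever) at whatever
# reference it has been handed; runs on every tick.

satiety_ref = 1.0     # higher-level reference: "be fed"
satiety = 1.0         # higher-level perception (1 = sated, 0 = starving)
effort_ref = 0.0      # reference handed down to the lower level
effort = 0.0          # lower-level perception

for t in range(2000):
    # Lower level: fast loop, updated on every tick.
    effort += 0.5 * (effort_ref - effort)

    # Environment: working the lever produces food; metabolism drains satiety.
    satiety += 0.02 * effort - 0.005

    # Higher level: slow loop, updated only every 25 ticks.  Its output sets
    # the lower level's reference: the hungrier, the more effort requested.
    if t % 25 == 0:
        effort_ref = max(0.0, 3.0 * (satiety_ref - satiety))

print("satiety:", round(satiety, 2))   # holds close to its reference
print("effort: ", round(effort, 2))    # whatever rate keeps satiety there
```

Note that the disturbance-handling is automatic: if the environment made each press deliver less food, the same wiring would simply settle on a higher effort, without any relearning of a stimulus-response pair.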
AFAICT, behaviorism can “explain” results like these, but does not actually predict them, in the sense that PCT is spelling out implementation-level details that behaviorism leaves to hand-waving. IOW, PCT is considerably more falsifiable than behaviorism, at least in principle. Eventually, PCT’s remaining predictions (i.e., the ones that haven’t already panned out at the anatomical level) will either be proven or disproven, while behaviorism doesn’t really make anatomical predictions about these matters.
To answer question 3, one could perform the experiment of surgically balancing leptin and ghrelin and not feeding or otherwise nourishing the subject. If the subject eventually dies of starvation, I would say the second theory is more likely.
Outstanding comment—particularly the point at the end about the candle cooling the room.
It might be worthwhile to produce a sequence of postings on the control systems perspective—particularly if you could use better-looking block diagrams as illustrations. :)
My interpretation of this interaction (which is fascinating to read, btw, because both of you are eloquently defending a cogent and interesting theory as far as I can tell) is that you’ve indirectly proposed Robot-1 as the initial model of an agent (which is clearly not a full model of a person and fails to capture many features of humans) in the first of a series of articles. I think Richard is objecting to the connections he presumes that you will eventually draw between Robot-1 and actual humans, and you’re getting confused because you’re just trying to talk about the thing you actually said, not the eventual conclusions he expects you to draw from your example.
If he’s expecting you to verbally zig when you’re actually planning to zag and you don’t notice that he’s trying to head you off at a pass you’re not even heading towards, it’s entirely reasonable for you to be confused by what he’s saying. (And if some of the audience also thinks you’re going to zig they’ll also see the theory he’s arguing against, and see that his arguments against “your predicted eventual conclusions” are valid, and upvote his criticism of something you haven’t yet said. And both of you are quite thoughtful and polite and educated so it’s good reading even if there is some confusion mixed into the back and forth.)
The place I think you were ambiguous enough to be misinterpreted was roughly here:
Suppose the robot had human level intelligence in some side module, but no access to its own source code; that it could learn about itself only through observing its own actions. The robot might come to the same conclusions we did: that it is a blue-minimizer, set upon a holy quest to rid the world of the scourge of blue objects.
You use the phrase “human level intelligence” and talk about the robot making the same fuzzy inferential leap that outside human observers might make. Also, this is remarkably close to how some humans with very poor impulse control actually seem to function, modulo some different reflexes and a moderately unreasonable belief in their own deliberative agency (a la Blindsight with the “Jubyr fcrpvrf vf ntabfvp ol qrsnhyg” line and so on).
If you had said up front that you’re using this as a toy model which has (for example) too few layers and no feedback from the “meta-observer” module to be an honestly plausible model of “properly functioning cohesively agentive mammals” I think Richard would not have made the mistake that I think he’s making about what you’re about to say. He keeps talking about a robust and vastly more complex model than Robot-1 (that being a multi-layer purposive control system) and talking about how not just hypothetical PCT algorithms but actual humans function, and you haven’t directly answered these concerns by saying clearly “I am not talking about humans yet, I’m just building conceptual vocabulary by showing how something clearly simpler might function to illustrate mechanistic thinking about mental processes”.
It might have helped if you were clear about the possibility that Robot-1 would emit words more like we might expect someone to emit several years after a serious brain lesion that severed some vital connections in their brain, after their verbal reasoning systems had updated on the lack of a functional connection between their conscious/verbal brain parts and their deeper body control systems. Like Robot-1 seems likely to me to end up saying something like “Watch out, I’m not just having a mental breakdown but I’ve never had any control over my body+brainstem’s actions in the first place! I have no volitional control over my behavior! If you’re wearing blue then take off the shirt or run away before I happen to turn around and see you and my reflex kicks in and my body tries to kill you. Dear god this sucks! Oh how I wish my mental architecture wasn’t so broken...”
For what it’s worth, I think the Robot-1 example is conceptually useful and I’m really looking forward to seeing how the whole sequence plays out :-)