For example, when a rat in a Skinner box is hungry (i.e., its satiety variable has deviated in the direction of hunger), and it then presses a lever, gets a food pellet, and its satiety variable returns to its reference range, would PCTists explain that as the rat getting rewarded for pressing the lever, and expect it to press the lever again the next time it's hungry?
The PCT learning model doesn’t require reinforcement at the control level, as its model of memory is a mapping from reference levels to predicted levels of other variables. I.e., when the rat notices that the lever-pressing is paired with food, a link is made between two perceptual variables: the position of the lever, and the availability of food. This means that the rat can learn that food is available, even when it’s not hungry.
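A minimal sketch of that idea in code (the class and variable names here are illustrative assumptions, not anything from the PCT literature): memory is a mapping from one perceptual variable to predicted values of others, and recording a link requires no reward signal at all.

```python
# Hypothetical sketch of PCT-style associative memory: a link between two
# perceptual variables, recorded without any reinforcement signal.
class AssociativeMemory:
    def __init__(self):
        self.links = {}  # observed perception -> set of predicted perceptions

    def record(self, observed, predicted):
        """Record that perceiving `observed` predicts `predicted`."""
        self.links.setdefault(observed, set()).add(predicted)

    def predict(self, observed):
        return self.links.get(observed, set())

memory = AssociativeMemory()
# The rat notices lever-pressing paired with food -- no hunger required:
memory.record("lever_pressed", "food_available")
print(memory.predict("lever_pressed"))  # {'food_available'}
```

Nothing in the recording step consults a drive state, which is the point: the link can be laid down whether or not the rat is hungry, with hunger affecting only how salient (and thus how likely to be recorded) the link is.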
Where reinforcement is relevant to PCT is in the strength of the linkage and in the likelihood of its being recorded. If the rat is hungry, then the linkage is more salient, and more likely to be learned.
Notice, though, that once again the animal's internal state is of primary importance, not the stimulus/response pairing. In a sense, you could say that you can teach an animal that a stimulus and a response are paired, but this isn't the same as making the animal behave. If we starved you and made you press a lever for your food, you might do it, or you might tell us to fork off. Yet we wouldn't claim, in that case, that you hadn't learned that pressing the lever leads to food.
(As Richard says, it’s well established that you can torture living creatures until they accede to your demands, but it won’t necessarily tell you much about how the creature normally works.)
In any case, PCT allows for the possibility of learning without “reinforcement” in the behaviorist sense, unless you torture the definition of reinforcement to the point that anything is reinforcement.
Regarding the leptin/ghrelin question, my understanding is that PCT, as a psychophysical model, primarily addresses those perceptual variables that are modeled by neural analog (i.e., an analog level maintained in a neural delay loop). While Powers makes many references to other sorts of negative feedback loops in organisms from cats to E. coli, the main thrust of his initial book is building up a model of what's going on, feedback-loop-wise, in the nervous system and brain, not in the body's endocrine systems.
To put it another way, PCT doesn’t say that control systems are universal, only that they are ubiquitous, and that the bulk of organisms’ neural systems are assembled from a relatively small number of distinct component types that closely resemble the sort of components that humans use when building machinery.
IOW, we should not expect PCT's model of neural control systems to apply directly to a hormone-level issue. However, we can reason from general principles: one advantage of a PCT model of the leptin/ghrelin question is that PCT includes an explicit model of hierarchy and conflict in control networks, so we can answer questions about what happens when, for example, both leptin and ghrelin signals are present.
If those signals are at the same level of the control hierarchy, we can expect conflict to result in oscillation, with the system alternating between trying to satisfy one or the other. If they're at different levels of the hierarchy, then we can expect one to override the other.
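To illustrate the same-level case, here's a toy simulation (the gains, references, and the winner-take-all arbitration rule are my own assumptions, not PCT canon): two controllers share one variable but hold conflicting references, so the variable oscillates instead of settling at either reference.

```python
# Toy model of same-level conflict: two controllers act on one shared
# variable x with conflicting references. Each step, whichever controller
# has the larger error acts, dragging x back and forth.
def simulate(steps=40, gain=0.5):
    x = 0.0
    ref_a, ref_b = 1.0, -1.0  # conflicting reference levels
    history = []
    for _ in range(steps):
        err_a, err_b = ref_a - x, ref_b - x
        # winner-take-all: the controller with the larger error acts
        err = err_a if abs(err_a) >= abs(err_b) else err_b
        x += gain * err
        history.append(x)
    return history

h = simulate()
# x never settles at either reference; it keeps flipping around the midpoint
print(h[-4:])
```

The exact waveform depends on the arbitration rule, but the qualitative result is the behavioral signature of conflict: neither controller gets its reference satisfied, and the shared variable oscillates indefinitely.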
But, unlike a behavioral model where the question of precedence between different stimuli and contexts is open to interpretation, PCT makes some testable predictions about what actually constitutes hierarchy, both in terms of expected behavior, and in terms of the physical structure of the underlying control circuitry.
That is, if you could dissect an organism and trace the neurons, PCT predicts a certain type of wiring to exist: first, that a dominant controller will have wiring to set the reference levels of the lower-level controllers it dominates, but not vice versa.
Second, PCT predicts that a dominant perception must be measured on a longer time scale than a dominated one; that is, the lower-level perception must have a higher sampling rate than the higher-level perception. Thus, for example, as a rat becomes hungrier (a longer-term perceptual variable), it becomes more likely to press a lever to receive food in spite of a shock.
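As a rough sketch of that timescale claim (all parameters and update rules are invented for illustration): a slow "hunger" loop sets the reference for a fast "lever-pressing" loop that runs ten steps for each slow step.

```python
# Toy two-level hierarchy: the slow outer loop (satiety/hunger) sets the
# reference for the fast inner loop (lever-pressing), never the reverse.
def run(slow_steps=20, fast_per_slow=10):
    satiety, satiety_ref = 1.0, 1.0  # start full; reference is "full"
    pressing = 0.0                   # lower-level perceptual variable
    for _ in range(slow_steps):      # slow loop: long-timescale perception
        satiety -= 0.1               # metabolism drains satiety
        hunger_error = satiety_ref - satiety
        pressing_ref = hunger_error  # higher level sets lower reference
        for _ in range(fast_per_slow):      # fast loop: high sampling rate
            pressing += 0.5 * (pressing_ref - pressing)
        if pressing > 0.5:           # vigorous pressing yields food
            satiety = min(satiety_ref, satiety + 0.3)
    return satiety, pressing

satiety, pressing = run()
print(round(satiety, 2), round(pressing, 2))
```

The numbers don't matter; the structure does: the higher-level perception only needs occasional sampling, while the lower-level loop must track its (changing) reference quickly, and the only downward signal is a reference setting.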
AFAICT, behaviorism can “explain” results like these, but does not actually predict them, in the sense that PCT is spelling out implementation-level details that behaviorism leaves to hand-waving. IOW, PCT is considerably more falsifiable than behaviorism, at least in principle. Eventually, PCT’s remaining predictions (i.e., the ones that haven’t already panned out at the anatomical level) will either be proven or disproven, while behaviorism doesn’t really make anatomical predictions about these matters.