FWIW, my enthusiasm over PCT has cooled considerably. Not because it’s not true, just because it’s gone from “OMG this explains everything” to just “how things work”. It’s a useful intuition pump for lots of things, not the least of which is the reason humans are primarily satisficers, and make pretty crappy maximizers. (To maximize, we generally need external positive feedback loops, like competition.)
(It’s also a useful tool for understanding the difference between what’s intuitive to a human and what’s intuitive to an AI. When you tell a human, “solve this problem”, they implicitly leave all their mental thermostats set to “and don’t change anything else”. A generic planning API that’s not based on a homeostatic control model implicitly considers everything up for grabs, whereas a human has a hierarchy of controlled values that are always being kept within acceptable parameters, so we don’t e.g. go around murdering people to use their body parts for computronium to solve the problem with. This behavioral difference falls naturally out of the PCT model.)
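To make that contrast concrete, here is a minimal sketch (my own illustration; the function names, the “ethics” band, and the toy numbers are all assumptions, not anything from the comment above): a bare optimizer treats every variable as available for improving its objective, while a homeostatic agent only considers actions that keep its controlled perceptions inside their acceptable bands, and otherwise does nothing.

```python
# Illustrative sketch only: a bare objective-maximizer vs. a homeostatic agent
# that keeps controlled perceptions inside acceptable bands. All names and
# numbers below are made up for the example.

def bare_maximizer(actions, objective):
    """Pick whichever action scores highest on the single objective,
    ignoring every other consequence ("everything is up for grabs")."""
    return max(actions, key=objective)

def homeostatic_agent(actions, objective, controlled, state_after):
    """Pick the best-scoring action among those that keep every controlled
    perception within its (lo, hi) band; do nothing if none qualify."""
    def acceptable(action):
        new_state = state_after(action)
        return all(lo <= new_state[name] <= hi
                   for name, (lo, hi) in controlled.items())
    admissible = [a for a in actions if acceptable(a)]
    return max(admissible, key=objective) if admissible else None

# Hypothetical usage: two candidate plans, one of which wrecks a controlled value.
plans = ["work_late", "harvest_bystanders"]
score = {"work_late": 3, "harvest_bystanders": 100}
effects = {"work_late": {"ethics": 0.9}, "harvest_bystanders": {"ethics": 0.0}}

print(bare_maximizer(plans, score.get))  # harvest_bystanders
print(homeostatic_agent(plans, score.get, {"ethics": (0.5, 1.0)}, effects.get))  # work_late
```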
At the same time, I have seen that not everything is a negative-feedback control loop, despite their prevalence. Sometimes a human is only one part of a larger set of interactions that can create either positive or negative feedback loops, even if individual humans are mostly composed of negative-feedback control loops.
Notably, for many biological processes, nature doesn’t bother to evolve negative-feedback control loops for things that didn’t need them in the ancestral environment (because resource limitations, competition, and the like kept them in check). If this weren’t true, superstimuli couldn’t exist, because we’d experience error as the stimulus increased past the intended design range. And then we wouldn’t e.g. get hooked on fast food.
That being said, here’s an example of something “self-help useful” about the PCT model that is not (AFAIK) predicted by any other psychological or neurological model: PCT says that a stable control system requires that higher-level controls operate on longer time scales than lower ones. More precisely, a higher-level perceptual value must always be a function of lower-level perceptions sampled over a longer time period than the one those lower-level perceptions are themselves sampled on. (Which means, for example, that if you base your happiness on perceptions that are moment-to-moment rather than measured over longer periods, you’re gonna have a bad time.)
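As a toy illustration of that timing claim (everything below is my own made-up sketch with assumed numbers, not something taken from the PCT literature): treat the higher-level perception as an average of a noisy lower-level perception over a longer window, and compare how much it swings when the window is one sample versus a hundred.

```python
import random

# Toy sketch: a low-level perception sampled every tick, and a higher-level
# perception defined as its average over a longer window. Numbers are assumed.
random.seed(0)
mood_samples = [random.gauss(0.0, 1.0) for _ in range(1000)]  # moment-to-moment perception

def higher_level(samples, window):
    """Higher-level perceptual value = lower-level perceptions averaged
    over `window` ticks (the longer time scale the stability claim asks for)."""
    return [sum(samples[i - window:i]) / window
            for i in range(window, len(samples))]

jittery = higher_level(mood_samples, window=1)    # same time scale as the low level
stable = higher_level(mood_samples, window=100)   # much longer time scale

spread = lambda xs: max(xs) - min(xs)
print(spread(jittery), spread(stable))  # the long-window signal swings far less
```

Basing “happiness” on the long-window signal is the stable option; basing it on the one-sample signal inherits all the moment-to-moment noise.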
Another idea, stated in a lot of “traditional” self-help, is that you can’t get something until you can perceive what it is. Some schools treat this as a hierarchical process, and a few even treat it as a formalism, i.e., that your goal is not well-formed until you can describe it in terms of what sensory evidence you would observe when the goal is reached. And even my own “desk-cleaning trick”, developed before I learned about PCT, is built on a perceptual contrast.
And speaking of contrast, the skill of “mental contrasting” is all the rage these days, and it’s also quite similar to what PCT says about perceptual contrast. (Not to mention being similar to the desk-cleaning trick.)
However, there’s a slight difference between what PCT would predict as optimal contrasting and what “mental contrasting” is. I believe that PCT would emphasize contrasting not with anticipated difficulties, but rather with whatever the current state of reality is. As it happens, Robert Fritz’s books and creativity training workshops (developed, AFAICT, independently of PCT) take this latter approach, and indeed the desk-cleaning trick was the result of me noticing that Fritz’s approach could be applied in an instantaneous manner to something rather less creative than making art or a business. (Again, prior to PCT exposure on my part.)
I would be interested to see experiments comparing “mental contrasting” as currently taught, with “structural tension” as taught by Fritz and company. I suspect that they’re not terribly different, though, because one byproduct of contrasting the goal state and current state is a sudden awareness of obstacles and/or required subgoals. So, being told to look for problems may in fact require people to implicitly perform this same comparison, and being told to do it the other way around might therefore only make a small difference.
FWIW, my enthusiasm over PCT has cooled considerably. Not because it’s not true, just because it’s gone from “OMG this explains everything” to just “how things work”.
I’m agreed with Kennaway on this.
It’s a useful intuition pump for lots of things, not the least of which is the reason humans are primarily satisficers, and make pretty crappy maximizers.
Technically, I disagree, because I want ‘satisficer’ to keep the original intended sense of “get X to at least this particular threshold value, and then don’t worry about getting it any higher.” I think controls point at… something I don’t have a good word for yet, but ‘proportioners’ that try to put in effort appropriate to the level of error.
(An aside: I was at the AAAI workshop on AI and Ethics yesterday, and someone shared a story about describing their simulated system, which proved statements like “if a person is about to fall in the hole, and the robot can find a plan that saves them, then the person never falls into the hole,” only to have an earlier audience respond with “well, why doesn’t the robot try to save someone even if they know they won’t succeed?”. This is ridiculous in the ‘maximizer’ model and the ‘satisficer’ model, but makes sense in the ‘proportioner’ model: if something needs to be done, then you need to try, because the effort is more important than the effect.)
want ‘satisficer’ to keep the original intended sense of “get X to at least this particular threshold value, and then don’t worry about getting it any higher.” I think controls point at… something I don’t have a good word for yet, but ‘proportioners’ that try to put in effort appropriate to the level of error.
And yet, that’s what they do. I mean, get X to a threshold value. It’s just that X is the “distance to desired value”, and we’re trying to reduce X rather than increase it. Where things get interesting is that the system is simultaneously doing this for a lot of different perceptions, like keeping effort expenditure proportionate to reward.
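A rough sketch of that “many thermostats at once” picture (the perceptions, references, and gains below are invented purely for illustration): each perception has its own reference value, and the correction applied to each is proportional to how far that perception currently is from its reference, all running simultaneously.

```python
# Invented example: several controlled perceptions, each with its own
# reference; the correction applied to each is proportional to its error.

references = {"warmth": 22.0, "blood_sugar": 5.0, "social_status": 7.0}  # made-up units
gains = {"warmth": 0.5, "blood_sugar": 0.8, "social_status": 0.2}

def control_step(perceptions):
    """Return a correction per perception that pushes each error toward zero;
    effort scales with how far the perception is from its reference."""
    return {name: gains[name] * (references[name] - value)
            for name, value in perceptions.items()}

print(control_step({"warmth": 18.0, "blood_sugar": 5.1, "social_status": 3.0}))
# Large errors get large corrections; near-zero errors get almost none.
```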
if something needs to be done, then you need to try, because the effort is more important than the effect.
I don’t understand this. People put forth effort in such a situation for various reasons, such as:
Lack of absolute certainty that the attempt will fail
Embarrassment at not being seen to try
Belief that they would be bad if they didn’t try
etc. It’s not about “effort” or “effect” or maximizing or satisficing per se. It’s just acting to reduce disturbances in current and predicted perceptions. Creating a new “proportioner” concept doesn’t make sense to me, as there don’t seem to be any leftover things to explain. It’s enough to consider that living beings are simultaneously seeking homeostasis across a wide variety of present and predicted perceptual variables. (Including very abstract ones like “self-esteem” or “social status”.)
Thinking about it more, maybe I should just use “controller” to point at what I want to point at, but the issue is that it’s a normal English word with many more implications than I want.
Creating a new “proportioner” concept doesn’t make sense to me, as there don’t seem to be any leftover things to explain.
Mathematically, there definitely is. That is, consider the following descriptions of one-dimensional systems (all of which are a bit too short to be formal, but I don’t feel like doing all the TeX necessary to make it formal and pretty):
1. max x s.t. x = f(u)
2. min u s.t. x ≥ x_min, x = f(u)
3. u = -k*e, e = x - x_ref, y = f(u, x)
The first is a maximizer that tries to get x as high as possible. The second is a lazy satisficer that tries to do as little as possible while getting x above some threshold (in general, a satisficer just cares about hitting the threshold, not about effort spent). The third is a simple negative-feedback controller, and it behaves differently from both the maximizer and the satisficer: it approaches the reference asymptotically, reducing the control effort as the disturbance decreases.
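For concreteness, here is a tiny simulation of the third system under an assumed plant model (the effort moves x directly, i.e. x = x + u each step; that model and the constants are mine, chosen only to make the behavior visible, not part of the comment above):

```python
# Sketch of system 3: u = -k*e, e = x - x_ref, with an assumed plant x <- x + u.
x, x_ref, k = 0.0, 10.0, 0.3

for step in range(30):
    e = x - x_ref      # error: signed distance from the desired value
    u = -k * e         # control effort, proportional to the error
    x = x + u          # assumed plant: the effort moves x directly
    if step % 5 == 0:
        print(f"step {step:2d}  x = {x:6.2f}  effort = {u:5.2f}")
```

The error, and with it the effort, shrinks geometrically as x approaches x_ref, which is the asymptotic behavior described above, unlike either the maximizer (1) or the lazy satisficer (2).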
My suspicion is that typically, when people talk about satisficers, they have something closer to 3 than 2 in mind. That is...
It’s just acting to reduce disturbances in current and predicted perceptions.
Agreed. But that’s not what a satisficer does (in the original meaning of the term).
humans are primarily satisficers, and make pretty crappy maximizers
Is there some reason they should be maximisers?
And then we wouldn’t e.g. get hooked on fast food.
“What do you mean by ‘we’, paleface?” :)
How do you explain why some people do not get hooked on fast food? To me, what McDonalds and similar places serve does not even count as food. It is simply not my inclination to eat such things. I don’t play computer games much either, to name another “superstimulus”. I do not click on any link entitled “10 things you must...” This isn’t the wisdom of age; the same is true of my younger adult selves (mutatis mutandis—some of those things had not been invented in those days).
Ok, that’s just me, but it’s an example I’m very familiar with, and it always feels odd to see people going on about superstimuli and losing weight and the ancestral environment and the latest pop sci fads and observe that I am mysteriously absent, despite not being any sort of alien in disguise.
Not because it’s not true, just because it’s gone from “OMG this explains everything” to just “how things work”.
This is high praise.