FWIW, my enthusiasm over PCT has cooled considerably. Not because it’s not true, just because it’s gone from “OMG this explains everything” to just “how things work”.
I’m agreed with Kennaway on this.
It’s a useful intuition pump for lots of things, not the least of which is the reason humans are primarily satisficers, and make pretty crappy maximizers.
Technically, I disagree, because I want ‘satisficer’ to keep the original intended sense of “get X to at least this particular threshold value, and then don’t worry about getting it any higher.” I think controls point at… something I don’t have a good word for yet, but ‘proportioners’ that try to put in effort appropriate to the level of error.
(An aside: I was at the AAAI workshop on AI and Ethics yesterday, and someone shared a story about presenting their simulated system, which proved statements like “if a person is about to fall in the hole, and the robot can find a plan that saves them, then the person never falls into the hole.” A previous audience had responded to this with “well, why doesn’t the robot try to save someone even if they know they won’t succeed?”. This is ridiculous in the ‘maximizer’ model and the ‘satisficer’ model, but makes sense in the ‘proportioner’ model—if something needs to be done, then you need to try, because the effort is more important than the effect.)
want ‘satisficer’ to keep the original intended sense of “get X to at least this particular threshold value, and then don’t worry about getting it any higher.” I think controls point at… something I don’t have a good word for yet, but ‘proportioners’ that try to put in effort appropriate to the level of error.
And yet, that’s what they do. I mean, get X to a threshold value. It’s just that X is the “distance to desired value”, and we’re trying to reduce X rather than increase it. Where things get interesting is that the system is simultaneously doing this for a lot of different perceptions, like keeping effort expenditure proportionate to reward.
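A minimal sketch of this reframing: a negative feedback loop can be read as “satisficing on X”, where X is the distance between the current and desired value, and action drives X toward zero. All names and constants here are illustrative, not from any particular source.

```python
# A negative-feedback loop viewed as reducing X = |x - x_ref|.
# The effort u is proportional to the error, so it shrinks as the
# perception approaches the desired value.

def run_loop(x0, x_ref, k=0.5, disturbance=0.0, steps=50):
    x = x0
    for _ in range(steps):
        error = x - x_ref          # X, the "distance to desired value"
        u = -k * error             # effort proportional to the error
        x = x + u + disturbance    # plant: action plus any external push
    return x

print(abs(run_loop(10.0, 3.0) - 3.0) < 1e-3)  # True: error driven to ~zero
```

Note that with a constant disturbance the loop settles at a steady-state offset of disturbance/k rather than exactly at the reference, which is one way the “distance” never quite hits a hard threshold.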
if something needs to be done, then you need to try, because the effort is more important than the effect.
I don’t understand this. People put forth effort in such a situation for various reasons, such as:
Lack of absolute certainty the attempt will fail
Embarrassment at not being seen to try
Belief they would be bad if they don’t try
etc. It’s not about “effort” or “effect” or maximizing or satisficing per se. It’s just acting to reduce disturbances in current and predicted perceptions. Creating a new “proportioner” concept doesn’t make sense to me, as there don’t seem to be any leftover things to explain. It’s enough to consider that living beings are simultaneously seeking homeostasis across a wide variety of present and predicted perceptual variables. (Including very abstract ones like “self-esteem” or “social status”.)
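The “simultaneous homeostasis” picture can be sketched as several independent loops ticking at once, each nudging its own perceptual variable toward its reference. The variable names and numbers below are illustrative stand-ins for the abstract perceptions mentioned above.

```python
# Several control loops running simultaneously, one per perceptual variable.
references = {"temperature": 37.0, "self_esteem": 0.8, "social_status": 0.5}
state = {"temperature": 39.0, "self_esteem": 0.2, "social_status": 0.9}
gain = 0.3

for _ in range(100):
    for var, ref in references.items():
        error = state[var] - ref
        state[var] -= gain * error   # each loop reduces its own disturbance

print(all(abs(state[v] - references[v]) < 1e-3 for v in references))  # True
```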
Thinking about it more, maybe I should just use “controller” to point at what I want to point at, but the issue is that it’s a normal English word with many more implications than I want.
Creating a new “proportioner” concept doesn’t make sense to me, as there don’t seem to be any leftover things to explain.
Mathematically, there definitely is. That is, consider the following descriptions of one-dimensional systems (all of which are a bit too short to be formal, but I don’t feel like doing all the TeX necessary to make it formal and pretty):
1. max x s.t. x=f(u)
2. min u s.t. x≥x_min, x=f(u)
3. u=-k*e, e=x-x_ref, y=f(u,x)
The first is a maximizer that tries to get x as high as possible. The second is a lazy satisficer that tries to do as little as possible while getting x above some threshold (in general, a satisficer just cares about hitting the threshold, not about effort spent). The third is a simple negative feedback controller, and it behaves differently from both the maximizer and the satisficer: it approaches the reference asymptotically, reducing the control effort as the disturbance decreases.
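The contrast between the three systems can be made concrete with a toy linear plant. Everything here is illustrative: I’m assuming f(u) = 2u, a bounded input, and some arbitrary thresholds and gains.

```python
# Contrasting the three one-dimensional systems with a toy plant x = f(u).

def f(u):
    return 2.0 * u  # toy linear plant

u_max = 10.0    # input bound
x_min = 6.0     # satisficer's threshold
x_ref = 6.0     # controller's reference

# 1. Maximizer: pushes u to its limit to drive x as high as possible.
u_maximizer = u_max

# 2. Lazy satisficer: smallest u such that f(u) >= x_min, then stops.
u_satisficer = x_min / 2.0

# 3. Negative feedback controller: u = -k*e, effort shrinks with the error.
k, x, us = 0.4, 0.0, []
for _ in range(30):
    e = x - x_ref
    u = -k * e
    us.append(u)
    x = x + f(u) * 0.1   # integrate a bit of the plant's response each step

print(f(u_maximizer))          # 20.0: as high as the input bound allows
print(f(u_satisficer))         # 6.0: exactly at the threshold, no more
print(us[0] > us[-1] >= 0.0)   # True: control effort decays toward zero
```

The qualitative difference shows up in the effort trace: the maximizer and satisficer each pick a fixed effort level, while the controller’s effort is large when the error is large and falls away as x closes in on the reference.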
My suspicion is that typically, when people talk about satisficers, they have something closer to 3 than 2 in mind. That is...
It’s just acting to reduce disturbances in current and predicted perceptions.
Agreed. But that’s not what a satisficer does (in the original meaning of the term).