Although I find PCT intriguing, all the examples of it I’ve found have been about simple motor tasks. I can take a guess at how you might use the Method of Levels to explain higher-level decisions like which candidate to vote for, or whether to take more heroin, but it seems hokey: I haven’t seen any reputable studies conducted at this level (except one, which claimed to find against it), and the theory seems philosophically opposed to conducting them (its proponents claim that “statistical tests are of no use in the study of living control systems”, which raises a red flag large enough to cover a small city).
I’ve found behaviorism much more useful for modeling the things I want to model. I’ve read the PCT arguments against behaviorism, and they seem ill-founded. For example, they note that animals sometimes auto-learn, which the behaviorist methodological insistence on external stimuli shouldn’t allow; but once we relax that methodological restriction, this looks like a case of surprise serving the same function as negative reinforcement, something so well understood that neuroscientists can even point to the exact neurons in charge of it.
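To make the point about surprise concrete, here’s a minimal sketch in Python (entirely my own illustration, with made-up numbers and no connection to any particular study): a temporal-difference value update, where the prediction-error term is the formal analogue of “surprise”, and an omitted reward produces a negative error that functions just like negative reinforcement on whatever behavior preceded it.

```python
# Hypothetical illustration: a value estimate updated by temporal-difference
# prediction error. The error `delta` is the formal analogue of "surprise":
# when an expected reward fails to arrive, delta < 0 and acts like negative
# reinforcement on whatever came before it, with no external stimulus
# supplied by an experimenter.

def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One temporal-difference learning step; returns (new_value, delta)."""
    delta = reward + gamma * next_value - value   # prediction error ("surprise")
    return value + alpha * delta, delta

value = 1.0                      # the animal expects this situation to pay off
reward, next_value = 0.0, 0.0    # the expected payoff never arrives
value, delta = td_update(value, reward, next_value)
print(value, delta)              # delta is negative: omitting an expected reward is itself punishing
```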
Richard’s PCT-based definition of goal is very different from mine, and although it’s easily applicable to things like controlling eye movements, it doesn’t have the same properties as the philosophical definition of “goal”, the one that’s applicable when you’re reading all the SIAI work about AI goals and goal-directed behavior and such.
By my definition of goal, if the robot’s goal were to minimize its perception of blue, it would shoot the laser exactly once—at its own visual apparatus—then remain immobile until turned off.
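To make the distinction concrete, here’s a toy sketch in Python (the robot, its camera, and all the numbers are invented for illustration): a PCT-style control loop keeps zapping blue things whenever its perception exceeds its reference, whereas an agent that genuinely optimized “minimize my perception of blue” would pick the one action that zeroes that perception permanently.

```python
# Toy contrast (hypothetical robot, made-up numbers): a control-loop agent
# versus an agent that actually optimizes "minimize my perception of blue".

class Robot:
    def __init__(self):
        self.blue_in_view = 3
        self.camera_works = True

    def perceived_blue(self):
        return self.blue_in_view if self.camera_works else 0

def control_step(robot, reference=0):
    """Control loop: whenever perception exceeds the reference, zap a blue thing."""
    if robot.perceived_blue() > reference:
        robot.blue_in_view -= 1        # and it will do this again as long as blue keeps showing up

def optimizing_step(robot):
    """Goal-directed version: take whichever available action most reduces perceived blue."""
    outcomes = {
        "zap a blue thing": robot.blue_in_view - 1,
        "zap own camera": 0,           # perception of blue drops to zero, once and for all
    }
    choice = min(outcomes, key=outcomes.get)
    if choice == "zap own camera":
        robot.camera_works = False
    else:
        robot.blue_in_view -= 1
    return choice

robot = Robot()
control_step(robot)
print(robot.perceived_blue())      # 2: still sees blue, so the loop will fire again next tick

print(optimizing_step(Robot()))    # "zap own camera": shoot the laser exactly once, at its own visual apparatus
```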
Ironically, quite a lot of human beings’ goals would be more easily met in such a way, and yet we still go around shooting our lasers at blue things, metaphorically speaking.
Or, more to the point: systems need not work efficiently towards the fulfillment of their goals.
In any case, your comments just highlight yet again the fact that goals are in the eye of the beholder. The robot is what it is and does what it does, no matter what stories our brains make up to explain it.
(We could then go on to say that our brains have a goal of ascribing goals to things that appear to be operating of their own accord, but this is just doing more of the same thing.)
Can you spell out the philosophical definition? My previous comment, which I posted before reading this, made only a vague guess at the concept you had in mind: “this sort of conscious, reflective, adaptive attempt to achieve what we ‘really’ want”.
I think we agree, especially when you use the word “reflective”. As opposed to, say, a reflex, which is an unconscious, nonreflective effort to achieve something that evolution or our designers decided to “want” for us. When the robot’s reflection that it could shoot the hologram projector instead of the hologram fails to motivate it to do so, I start doubting that its behaviors are goal-driven and suspecting they’re reflexive.