What is the robot’s goal? To follow the program detailed in the first paragraph?
I suspect Richard would say that the robot’s goal is minimizing its perception of blue. That’s the PCT perspective on the behavior of biological systems in such scenarios.
However, I’m not sure this description actually applies to the robot, since the program was specified as “scan and shoot”, not “notice when there’s too much blue and get rid of it”. In observed biological systems, goals are typically expressed as perception-based negative feedback loops implemented in hardware, rather than as purely rote programs or high-level software algorithms. But without more details of the robot’s design, it’s hard to say whether it really meets the PCT criterion for goals.
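To make that distinction concrete, here’s a minimal sketch in Python (all names and numbers are made up for illustration; we don’t know the robot’s actual implementation) of a rote program versus a perception-based negative feedback loop:

```python
# Illustrative only: neither loop is the robot's actual program.

def rote_scan_and_shoot(frames):
    """Open-loop 'scan and shoot': fire at every blue detection, never check the result."""
    shots = 0
    for blue_pixels in frames:      # one scan per frame
        if blue_pixels > 0:
            shots += 1              # fire regardless of what it accomplishes
    return shots

def pct_style_control(perceived_blue, reference=0.0, gain=0.5, steps=20):
    """Closed-loop control: act only so as to keep the *perception* near the reference."""
    history = []
    for _ in range(steps):
        error = perceived_blue - reference   # compare perception to reference
        output = gain * error                # act in proportion to the error
        perceived_blue -= output             # assumed environment feedback: acting reduces blue
        history.append(perceived_blue)
    return history

print(rote_scan_and_shoot([3, 0, 5, 2]))      # fires blindly: 3 shots
print(round(pct_style_control(10.0)[-1], 6))  # perception driven toward the reference (0)
```

The only point is that the second loop’s output varies with what it perceives, whereas the first simply executes its script.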
Of course, from a certain perspective, you could say at a high level that the robot’s behavior is as if it had a goal of minimizing its perception of blue. But as your post points out, this idea is in the mind of the beholder, not in the robot. I would go further and say that all such labeling of things as goals occurs in the minds of observers, regardless of how complex or simple the biological, mechanical, electronic, or other source of behavior is.
This ‘minimization’ goal would require a brain that is powerful enough to believe that lasers destroy or discolor what they hit.
If this post were read by blue aliens that thrive on laser energy, they’d wonder why we were so confused as to the purpose of an automatic baby feeder.
From the PCT perspective, the goal of an E. coli bacterium swimming away from toxins and towards food is to keep its perceptions within certain ranges; this doesn’t require a brain of any sort at all.
What requires a brain is for an outside observer to ascribe goals to a system. For example, we ascribe a thermostat’s goal to be to keep the temperature in a certain range. This does not require that the thermostat itself be aware of this goal.
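As a concrete illustration (a toy thermostat with made-up numbers, not any particular device): the loop below contains nothing but a comparison and a switch; the “goal” of keeping the room near 20 °C exists only in our description of it.

```python
# A bare-bones thermostat: nothing in here "knows" it has a goal.
def thermostat_step(temperature, heater_on, setpoint=20.0, band=0.5):
    """Return the heater state after a single comparison against the setpoint."""
    if temperature < setpoint - band:
        return True        # too cold: switch the heater on
    if temperature > setpoint + band:
        return False       # too warm: switch the heater off
    return heater_on       # inside the deadband: leave it alone

# We, the observers, describe this as "trying to keep the room near 20 °C".
print(thermostat_step(18.2, heater_on=False))  # True
print(thermostat_step(21.3, heater_on=True))   # False
print(thermostat_step(20.1, heater_on=True))   # True (unchanged)
```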
> If this post were read by blue aliens that thrive on laser energy, they’d wonder why we were so confused as to the purpose of an automatic baby feeder.
Clever!
Although I find PCT intriguing, all the examples of it I’ve found have been about simple motor tasks. I can take a guess at how you might use the Method of Levels to explain larger-level decisions like which candidate to vote for, or whether to take more heroin, but it seems hokey: I haven’t seen any reputable studies conducted at this level (except one, which claimed to have found against it), and the theory seems philosophically opposed to conducting them (its proponents claim that “statistical tests are of no use in the study of living control systems”, which raises a red flag large enough to cover a small city).
I’ve found behaviorism much more useful for modeling the things I want to model. I’ve read the PCT arguments against behaviorism, and they seem ill-founded. For example, they note that animals sometimes auto-learn, which the behaviorist methodological insistence on external stimuli shouldn’t allow; but once we relax that methodological restriction, this looks like a case of surprise serving the same function as negative reinforcement, a mechanism so well understood that neuroscientists can point to the exact neurons in charge of it.
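For what it’s worth, here’s a minimal sketch of that standard reward-prediction-error account (a textbook temporal-difference update with made-up numbers, not anything taken from PCT or from the robot scenario): a worse-than-expected outcome yields a negative prediction error, which pushes the value estimate down, i.e. surprise doing the same work as negative reinforcement.

```python
# Minimal temporal-difference sketch; numbers are illustrative only.
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """Return the updated value estimate and the prediction error ("surprise")."""
    delta = reward + gamma * next_value - value   # prediction error
    return value + alpha * delta, delta

v = 5.0                                           # the animal expected this state to be worth ~5
v, delta = td_update(v, reward=0.0, next_value=0.0)
print(delta, v)                                   # delta = -5.0: disappointment acts like punishment
```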
Richard’s PCT-based definition of goal is very different from mine, and although it’s easily applicable to things like controlling eye movements, it doesn’t have the same properties as the philosophical definition of “goal”, the one that’s applicable when you’re reading all the SIAI work about AI goals and goal-directed behavior and such.
By my definition of goal, if the robot’s goal were to minimize its perception of blue, it would shoot the laser exactly once—at its own visual apparatus—then remain immobile until turned off.
Ironically, quite a lot of human beings’ goals would be more easily met in such a way, and yet we still go around shooting our lasers at blue things, metaphorically speaking.
Or, more to the point, systems need not efficiently work towards their goals’ fulfillment.
In any case, your comments just highlight yet again the fact that goals are in the eye of the beholder. The robot is what it is and does what it does, no matter what stories our brains make up to explain it.
(We could then go on to say that our brains have a goal of ascribing goals to things that appear to be operating of their own accord, but this is just doing more of the same thing.)
Can you spell out the philosophical definition? My previous comment, which I posted before reading this, made only a vague guess at the concept you had in mind: “this sort of conscious, reflective, adaptive attempt to achieve what we ‘really’ want”.
I think we agree, especially when you use the word “reflective”. As opposed to, say, a reflex, which is an unconscious, nonreflective effort to achieve something that evolution or our designers decided to “want” for us. When the robot’s reflection that it could shoot the hologram projector instead of the hologram fails to motivate it to do so, I start doubting that its behaviors are goal-driven, and suspecting they’re reflexive.
Every time you bring up PCT, I have to bring up my reasons for concluding that it’s pseudoscience of the worst sort. (Note that this is an analysis of an experiment that PJ Eby himself picked to support his claims.)
Actually, Yvain brought it up.
Which linking I don’t mind a bit, since you’re effectively linking to my reply as well, which is then followed by your hasty departure from the thread with a claim that you’d answer my other points “later”… with no further comment for just under two years. Guess it’s not “later” yet… ;-)
(Also, anyone who cares to read upthread from that link can see where I agreed with you about Marken’s paper, or how much time it took me to get you to state your “true rejection” before you dropped out of the discussion. AFAICT, you were only having the discussion so you could find ammunition for a conclusion you’d reached long before that point.)
You also seem to have the mistaken notion that I’m an idea partisan, i.e., that because I say an idea has some merit, or that it isn’t completely worthless, I must be an official spokesperson for that idea as well, and therefore an Evil Outsider to be attacked.
Well, I’m not, and you’re being rude. Not only to me, but to everyone in the thread who’s now had to listen to both your petty hit-and-run pa(troll)ing, and to me replying.
So, I’m out of here (the subthread), but I won’t be coming back later to address any missed points, since the burden is still on you to actually address any of the many, MANY questions I asked you in that two-year-old thread, for which you still have yet to offer any reply, AFAICT.
I entered that discussion with a willingness to change my mind, but from the evidence at hand, it seems you did not.
(Note: if you do wish to have an intelligent discussion on the topic, you may reach me via the old thread. I’m pre-committing not to reply to you in this one, where you can indulge your obvious desire to score points off an audience, vs. actually discussing anything.)
Thanks for the poisoned well, but I don’t intend to abuse the last word. I think more highly of you now than I did when we had our prior altercation, but it remains true that I’ve seen zero experimental evidence for PCT in a cognitive context, and that Marken’s paper is an absolute mathematical sham. There may be valid aspects to PCT, but it hasn’t yet justified its use as a cognitive theory, and I feel that it’s important to note this whenever it comes up on Less Wrong.
(Incidentally, the reason I trailed off in that thread is that I’d done something that in retrospect was poor form: I’d written up a full critique of the Marken paper before I asked you whether you thought it constituted experimental evidence, and I was frustrated that you didn’t walk into the trap. If we both agree that the paper is pseudoscience, though, there’s nothing left to add.)
P.S. I don’t doubt that you’ve had success working with people through a PCT framework, but I suspect that it’s a placebo effect: a sufficiently fuzzy framework gives you room to justify your (usually correct) unconscious intuitions about what’s going on, and grants them the gravitas of a deep-sounding theory. (You might do just as well if you were a Freudian.) That’s one reason why I discount anecdotal evidence of that form.