Do goals always have to be consciously chosen? When you have simple if-then rules, such as “if (stimulusOnLips) then StartSuckling()”, doesn’t that count as goal-fulfilling behavior? Even a sleeping human is carrying out an endless stream of maintenance tasks, in unconscious pursuit of a goal such as “maintain the body in working order”. Does that count?
I can see “goal” being sensibly defined either way, so it may be best not to insist on “must be consciously formulated” for the purposes of this post, and move on.
My impression is that this is not how AI researchers use the word “goal.” The kind of agent you’re describing is a “reflex agent”: it acts based only on the current percept. A goal-directed agent, by contrast, is one that models the world, extrapolates future states of the world, and takes actions to bring about future states of a particular kind. In particular, to model the world accurately, a goal-directed agent must take all of its past percepts into account.
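A rough sketch of the distinction in Python (the names, the dict-based “world model”, and the toy mismatch score here are illustrative assumptions, not anything from a particular textbook or library): the reflex agent maps the current percept straight to an action, while the goal-based agent folds percepts into a model and picks the action whose predicted outcome is closest to its goal state.

    # Illustrative sketch only; names and the toy "model" are hypothetical.

    def reflex_agent(percept):
        """Acts on the current percept alone: a bare condition-action rule."""
        if percept == "stimulus_on_lips":
            return "start_suckling"
        return "do_nothing"

    class GoalBasedAgent:
        """Accumulates percepts into a world model and chooses the action
        whose predicted outcome best matches the goal state."""

        def __init__(self, goal_state, transition_model):
            self.goal_state = goal_state              # e.g. {"fed": True}
            self.transition_model = transition_model  # (state, action) -> predicted state
            self.state = {}                           # built up from all past percepts

        def observe(self, percept):
            self.state.update(percept)                # percepts are dicts here

        def act(self, actions):
            def mismatch(action):
                predicted = self.transition_model(self.state, action)
                return sum(predicted.get(k) != v for k, v in self.goal_state.items())
            return min(actions, key=mismatch)

The point of the contrast is only that the second agent’s behaviour depends on an internal model and a target future state, not that either snippet is a serious implementation.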
Goal-based agents are something quite specific in AI, but it is not clear that we should use that particular definition whenever referring to goals/aims/purpose. I’m fine with choosing it and going with that—avoiding definitional squabbles—but it wasn’t clear prima facie (hence the grandparent).
No, they don’t have to be consciously chosen. The classic example of a simple agent is a thermostat (http://en.wikipedia.org/wiki/Intelligent_agent), which has the goal of keeping the room at a constant temperature. (Or you can say “describing the thermostat as having a goal of keeping the temperature constant is a simpler means of predicting its behaviour than describing its inner workings”). Goals are necessary but not sufficient for intelligence.
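For concreteness, here is a thermostat written as a few lines of Python (the set point and hysteresis band are made-up numbers): nothing in it is conscious, yet “it tries to keep the room near 20°C” is a perfectly good, and much shorter, description of its behaviour than a description of its inner workings.

    # Toy thermostat; the set point and band are arbitrary illustrative values.

    def thermostat(temperature, set_point=20.0, band=0.5):
        """Condition-action rule whose behaviour is compactly described by
        the goal 'keep the room near set_point'."""
        if temperature < set_point - band:
            return "heater_on"
        if temperature > set_point + band:
            return "heater_off"
        return "no_change"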
Which answers Trevor’s initial question.