Is a newborn human baby, or a human of any age who is asleep, intelligent by this definition?
Do goals always have to be consciously chosen? When you have simple if-then clauses, such as “if (stimulusOnLips) then StartSuckling()”, doesn’t that count as goal-fulfilling behavior? Even a sleeping human is pursuing an endless stream of maintenance tasks, in non-conscious pursuit of a goal such as “maintain the body in working order”. Does that count?
I can see “goal” being sensibly defined either way, so it may be best not to insist on “must be consciously formulated” for the purposes of this post, then move on.
My impression is that this is not how AI researchers use the word “goal.” The kind of agent you’re describing is a “reflex agent”: it acts only on the current percept. A goal-directed agent is explicitly one that models the world, extrapolates future states of the world, and takes action to cause future states of the world to be a certain way. In particular, to model the world accurately, a goal-directed agent must take into account all of its past percepts.
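A rough sketch of that distinction in Python, with made-up names and a toy world model (nothing here is a standard definition, just an illustration of the contrast):

    # Hypothetical names and a toy world model, purely for illustration.
    def reflex_agent(percept):
        """Acts only on the current percept: no memory, no model of the world."""
        if percept == "stimulus_on_lips":
            return "start_suckling"
        return "do_nothing"

    class GoalDirectedAgent:
        """Accumulates past percepts into a world model, extrapolates the effect
        of each action, and picks the one whose predicted outcome matches the goal."""
        def __init__(self, goal, effects):
            self.goal = goal          # desired future state, e.g. {"fed": True}
            self.effects = effects    # assumed action -> state-change model
            self.model = {}           # world state built up from past percepts

        def act(self, percept, actions):
            self.model.update(percept)
            def mismatch(action):
                predicted = {**self.model, **self.effects.get(action, {})}
                return sum(predicted.get(k) != v for k, v in self.goal.items())
            return min(actions, key=mismatch)

    # The goal-directed agent chooses "eat" because its (toy) model predicts
    # that action leads to the goal state {"fed": True}.
    agent = GoalDirectedAgent(goal={"fed": True},
                              effects={"eat": {"fed": True}, "sleep": {"rested": True}})
    print(agent.act({"fed": False}, ["eat", "sleep"]))  # -> eat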
Goal-based agents are something quite specific in AI, but it is not clear that we should use that particular definition whenever referring to goals/aims/purpose. I’m fine with choosing it and going with that—avoiding definitional squabbles—but it wasn’t clear prima facie (hence the grandparent).
No, they don’t have to be consciously chosen. The classic example of a simple agent is a thermostat (http://en.wikipedia.org/wiki/Intelligent_agent), which has the goal of keeping the room at a constant temperature. (Or you can say “describing the thermostat as having a goal of keeping the temperature constant is a simpler means of predicting its behaviour than describing its inner workings”). Goals are necessary but not sufficient for intelligence.
Which answers Trevor’s initial question.
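To make the thermostat example concrete, here is a toy version (the setpoint and names are invented for the sketch): its entire inner workings are one comparison, yet “it has the goal of keeping the room at 20 degrees” predicts the same behaviour more compactly.

    SETPOINT = 20.0  # assumed target temperature, in Celsius

    def thermostat(current_temp):
        # Inner-workings description: a single threshold rule.
        return "heater_on" if current_temp < SETPOINT else "heater_off"

    # Goal description: "keep the room near 20 degrees" summarises the same
    # behaviour without mentioning the rule at all.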
Intelligence is a spectrum, not either/or—a newborn baby is about as intelligent as some mammals. Although it doesn’t have any conscious goals, its behaviour (hungry → cry, nipple → suck) can be explained in terms of it having the goal of staying alive.
A sleeping person—I didn’t actually think of that. What do you think?
Hmm, I feel like I should have made it clearer that the post is just a high-level summary of what I wrote on my blog. Seriously people, read the full post if you have time; I explain stuff in quite a bit more depth.
Given your lack of clear definitions for the terms you use, here or on your blog (and the definitions you do have are quite circular), spending more time on it is not likely to be of value.