My impression is that this is not how AI researchers use the word “goal.” The kind of agent you’re describing is a “reflex agent”: it acts only based on the current percept. A goal-directed agent is explicitly one that models the world, extrapolates future states of the world, and takes action to cause future states of the world to be a certain way. To model the world accurately, in particular, a goal-directed agent must take into account all of its past percepts.
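To make the contrast concrete, here is a minimal sketch in Python (the agent names, percept type, and helper callables are all hypothetical, just illustrating the taxonomy): a reflex agent maps the current percept straight to an action, while a goal-based agent folds every percept into a world model, predicts the outcome of candidate actions, and picks one expected to satisfy its goal.

```python
from typing import Callable, Dict, List


class ReflexAgent:
    """Maps the *current* percept directly to an action via fixed rules."""

    def __init__(self, rules: Callable[[str], str]):
        self.rules = rules

    def act(self, percept: str) -> str:
        # No memory, no model, no lookahead: just condition -> action.
        return self.rules(percept)


class GoalBasedAgent:
    """Builds a world model from all past percepts, simulates candidate
    actions, and chooses one predicted to bring about the goal state."""

    def __init__(self, goal: str, actions: List[str],
                 update_model: Callable[[Dict, str], Dict],
                 predict: Callable[[Dict, str], Dict],
                 satisfies_goal: Callable[[Dict, str], bool]):
        self.goal = goal
        self.actions = actions
        self.model: Dict = {}  # internal representation of the world
        self.update_model = update_model
        self.predict = predict
        self.satisfies_goal = satisfies_goal

    def act(self, percept: str) -> str:
        # Incorporate the new percept into the model; history matters here.
        self.model = self.update_model(self.model, percept)
        # Extrapolate: which action leads to a future state meeting the goal?
        for action in self.actions:
            if self.satisfies_goal(self.predict(self.model, action), self.goal):
                return action
        return self.actions[0]  # fallback if no action is predicted to work
```

The point of the sketch is only the structural difference: the reflex agent's `act` is a pure function of the current percept, whereas the goal-based agent's `act` depends on accumulated state plus explicit prediction against a goal.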
Goal-based agents are something quite specific in AI, but it is not clear that we should use that particular definition whenever referring to goals/aims/purpose. I’m fine with choosing it and going with that—avoiding definitional squabbles—but it wasn’t clear prima facie (hence the grandparent).