Richard’s PCT-based definition of goal is very different from mine, and although it’s easily applicable to things like controlling eye movements, it doesn’t have the same properties as the philosophical definition of “goal”, the one that’s applicable when you’re reading all the SIAI work about AI goals and goal-directed behavior and such.
Can you spell out the philosophical definition? My previous comment, which I posted before reading this, made only a vague guess at the concept you had in mind: “this sort of conscious, reflective, adaptive attempt to achieve what we ‘really’ want”.
I think we agree, especially when you use the word “reflective”. As opposed to, say, a reflex, which is an unconscious, nonreflective effort to achieve something that evolution or our designers decided to “want” for us. When the robot reflects that it could shoot the hologram projector instead of the hologram, and that reflection fails to motivate it to do so, I start doubting its behaviors are goal-driven and suspecting they’re reflexive.