I usually think that logic-based reasoning systems are the canonical example of an AI without goal-directed behaviour.
Yeah, that seems right to me. Though it’s not clear how you’d use a logic-based reasoning system to act in the world—if you do that by asking the question “what action would lead to the maximum value of this function”, which it then computes using logic-based reasoning, then the resulting behavior would be goal-directed.
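Concretely, here’s a toy sketch of that wrapper (all names are illustrative, not any particular system’s API): the oracle only answers factual questions, and the goal-directedness comes entirely from the loop around it.

```python
def oracle_evaluate(action, utility):
    """Stand-in for a logic-based reasoner: it only answers the
    factual question "what is the value of utility(action)?"."""
    return utility(action)

def act(actions, utility):
    # The wrapper, not the oracle, is what makes this goal-directed:
    # it asks "which action maximises this function?" and then
    # executes the oracle's answer.
    return max(actions, key=lambda a: oracle_evaluate(a, utility))
```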
> Though it’s not clear how you’d use a logic-based reasoning system to act in the world
The easy way to use them would be as they are intended: oracles that will answer questions about factual statements. Humans would still do the questioning and implementing here. It’s unclear how exactly you’d ask really complicated, natural-language-based questions (obviously, otherwise we’d have solved AI), but I think it serves as an example of the paradigm.
I’m fairly sure you can specify the behaviour of _anything_ as maximising some utility function.
Yup. I actually made this argument two posts ago.
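For reference, the quick version of that argument, as a toy sketch (hypothetical names, just to make the construction concrete): for any fixed behaviour, you can write down a utility function that assigns 1 to acting exactly as that behaviour does and 0 to anything else, and the behaviour then trivially maximises it.

```python
def rationalising_utility(observed_policy):
    """Build a utility function that a given policy maximises:
    1 for choosing exactly the action the policy would choose in a
    state, 0 for anything else."""
    def utility(state, action):
        return 1.0 if action == observed_policy(state) else 0.0
    return utility
```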
Ah, that’s good. I should probably read the rest of the sequence too.