Interesting points!
My first reaction is that for the system to accomplish its goal, it must eventually behave goal-directedly. It’s easy to imagine an AI accomplishing goal X while pursuing a different goal Y (for example, building a robot that does X), but it’s hard to imagine the AI accomplishing goal X without accomplishing any goal at all.
As for your proposed answers:
I guess this is a probabilistic take on my argument that the system will eventually need to be goal-directed to get things done.
I would say more “constraints on how well the actual goal must be accomplished.”
That’s true, but I’m not sure yet that we can detect deception.
Basically my intuition.