I pretty much agree with the spirit of the Omohundro quote. It usually helps you meet your goals if you know what they are. That's unlikely to be a feature specific to humans, and it is likely to apply to goal-directed agents above a certain threshold (agents that are too simple may not get much out of it). Of course, agents might start out with a clear representation of their goals, but if they don't, they are likely to want one, as a basic component of the task of modelling themselves.
Yeah, that’s why I called it “anthropomorphizing” in the post. It’s always been a strikingly unsuccessful way to make predictions about computers.