To the extent that intelligence solves problems, yes: the problem-solving modules of an intelligent entity have at least short-term goals. But whether the entity as a whole has goals is a different question.
I can imagine a sufficiently dangerous uFAI having no consistent goals we would recognise, with all of its problem-solving behaviour, however powerful and adaptable, operating at a level we wouldn't call intelligence. For example, the uFAI could include a strategic module like a super-powerful but clearly non-sentient chess computer, which in itself would have no awareness or intent, just problem-solving behaviour.
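To make the chess-computer analogy concrete, here is a minimal sketch of what I mean by goal-free problem-solving behaviour. It uses Python and the toy game of Nim (take 1 to 3 stones; whoever takes the last stone wins) purely as an assumed stand-in for chess: a minimax search that reliably finds winning moves while containing nothing anyone would call awareness, intent, or entity-level goals.

```python
# A toy minimax search: powerful "problem-solving behaviour" with no
# awareness and no entity-level goals. Nim stands in for chess here:
# players alternately remove 1-3 stones; whoever takes the last stone wins.

def minimax(stones, maximising):
    """Score the position for the maximiser: +1 if the maximiser can
    force a win from here, -1 if the opponent can."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximising else 1
    scores = [minimax(stones - take, not maximising)
              for take in range(1, min(3, stones) + 1)]
    return max(scores) if maximising else min(scores)

def best_move(stones):
    """Pick the move with the best guaranteed outcome. Nothing here
    'intends' anything; it is exhaustive mechanical search."""
    moves = range(1, min(3, stones) + 1)
    return max(moves, key=lambda take: minimax(stones - take, False))

if __name__ == "__main__":
    print(best_move(10))  # -> 2 (leaves 8 stones, a lost position for the opponent)
```

The search "wants" to win only in the thin sense that a thermostat "wants" a temperature: the optimisation is entirely mechanical, which is exactly the sense in which a strategic module could be dangerous without being intelligent.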
Actually, I’m not going to disagree with you about definitions of intelligence. But I suspect most of them are placeholders until we understand enough to at least partly dissolve the question “what is intelligence?”