To me the very notion of an AI system not having any goals at all seems inherently wrong. If the system is doing something, even if that something is just reasoning, it must have some means of deciding what to do out of the infinite pool of things that could possibly be done. Whatever that deciding mechanism is defines the goal.
Goal-directed behaviour can be as simple as what a central heating thermostat does. An AI could very well have no internal representation of what its own goal is, but if it is carrying out computations, it almost certainly has something that directs what sort of computations it is expected to carry out, and that is quite enough to define a goal for it.
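To make the thermostat comparison concrete, here is a minimal sketch (my own illustrative Python, not anything from the original argument) of a control rule that contains no representation of a goal, yet whose decision rule still defines one:

```python
def heater_on(current_temp: float, setpoint: float = 20.0) -> bool:
    """A thermostat's entire 'decision procedure': turn the heater on
    whenever the room is colder than the setpoint."""
    return current_temp < setpoint

# The rule never represents "keep the room near 20 degrees" anywhere,
# but the mechanism that selects its action from the possible actions
# is exactly what gives it that goal.
print(heater_on(18.2))  # True: heat
print(heater_on(21.0))  # False: idle
```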