Current systems don’t have a goal slot, but neither are they agentic enough to be really useful. An explicit goal slot is highly useful when carrying out complex tasks that have subgoals. Humans clearly have a functional “goal slot,” although the way goals are selected and implemented is complex.
And it’s trivial to add a goal slot; with a highly intelligent LLM, one prompt called repeatedly will do:
Act as a helpful assistant carrying out the user’s instructions as they were intended. Use these tools to gather information, including clarifying instructions, and take action as necessary [tool descriptions and APIs].
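Concretely, a minimal sketch of that loop might look something like the following. This is only an illustration under assumptions: call_llm is a placeholder for whatever chat-completion API is actually used, and the tool registry and message format are made up for the example.

```python
from typing import Callable, Dict, Optional

# The "goal slot": one fixed prompt, with the available tools spliced in.
GOAL_PROMPT = (
    "Act as a helpful assistant carrying out the user's instructions as they were "
    "intended. Use these tools to gather information, including clarifying "
    "instructions, and take action as necessary: {tools}"
)

def call_llm(messages: list) -> dict:
    """Placeholder for a real chat-completion call. Assumed to return either
    {"tool": name, "args": {...}} to request a tool, or {"answer": text} when done."""
    raise NotImplementedError("wire up an actual model provider here")

def run_goal_slot_agent(instruction: str,
                        tools: Dict[str, Callable[..., str]],
                        max_steps: int = 20) -> Optional[str]:
    # The user's instruction sits at the top of the context for every call:
    # that is all the "goal slot" amounts to here.
    messages = [
        {"role": "system", "content": GOAL_PROMPT.format(tools=", ".join(tools))},
        {"role": "user", "content": instruction},
    ]
    for _ in range(max_steps):
        step = call_llm(messages)
        messages.append({"role": "assistant", "content": str(step)})
        if "tool" in step:                      # the model wants to gather info or act
            result = tools[step["tool"]](**step.get("args", {}))
            messages.append({"role": "tool", "content": result})
        else:                                   # the model reports the task is done
            return step["answer"]
    return None                                 # give up after max_steps
```

The point is how little machinery is involved: the goal is just held fixed in the context while the model repeatedly decides whether to clarify, gather information, act, or stop.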
Nonetheless, the bitter lesson is relevant: carefully choosing the training set for the LLM’s “thought production” should help, as described in A “Bitter Lesson” Approach to Aligning AGI and ASI.
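As a very rough illustration of that point (and only that: the record format and the filter labels below are assumptions for the example, not the method from that post), “carefully choosing the training set” could start as simply as filtering which reasoning traces the thought-production model is fine-tuned on:

```python
import json

def load_corpus(path: str) -> list:
    """Assumed format: one JSON object per line, e.g. {"thought": "...", "flags": ["..."]}."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def select_training_set(records: list, banned_flags: set) -> list:
    # Keep only traces carrying none of the banned labels, however those labels
    # were produced (human review, a classifier, etc.).
    return [r for r in records if not banned_flags.intersection(r.get("flags", []))]

if __name__ == "__main__":
    corpus = load_corpus("thought_traces.jsonl")            # hypothetical file
    kept = select_training_set(corpus, {"deceptive", "power-seeking"})
    print(f"kept {len(kept)} of {len(corpus)} traces")
```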
While the bitter lesson is somewhat relevant, selecting and interpreting goals seems likely to be the core consideration once we expand current network AI into more useful (and dangerous) agents.