This sounds more like a tool AI! I thought that agent AIs generally had more persistent utility measures—this looks like the sort of thing where the AI has NO utility maximizing behavior until a problem is presented, then temporarily instantiates a problem-specific utility function (like the above).
Well, yes, it is a tool AI. But it does have a utility function, it can be built on top of a decision theory, etc., and in that sense it is an agent.
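To make the distinction concrete, here is a minimal sketch of the arrangement described above: the system holds no persistent objective, and a problem-specific utility function only exists for the duration of a single call. The class and function names (`ToolAI`, `solve`, `utility`) are hypothetical illustrations, not anything from an actual system.

```python
from typing import Any, Callable, Iterable


class ToolAI:
    """A tool-style optimizer: it has no standing goal. A utility
    function exists only while solve() is running."""

    def solve(self, candidates: Iterable[Any],
              utility: Callable[[Any], float]) -> Any:
        # The problem-specific utility function is supplied here,
        # used once to rank candidates, and discarded on return.
        return max(candidates, key=utility)


# Usage: the "agent-like" part is just an argmax over a utility the
# caller defines for this one problem; between calls there is nothing
# being maximized.
tool = ToolAI()
best = tool.solve(candidates=range(-10, 11),
                  utility=lambda x: -(x - 3) ** 2)
print(best)  # 3
```

On this framing, the same machinery (a utility function plus a decision procedure) underlies both cases; what differs is whether the objective persists between problems.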