Hang on, a moment ago they were functions from outputs to values. Now they’re functions from inputs to values. Which are they?
Gonna take a wild stab:
A “Utility Function” is a function from the space of (sensory inputs including memories) to the space of (functions from outputs to values).
For any given set of (sensory inputs including memories), we can call that set’s image under our “Utility Function” a “utility function” and then sometimes mess up the capitalization.
Is that more clear, and/or is that what was being said?
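If it helps, here is the same stab as a rough Python sketch; every name in it is invented for illustration:

```python
from typing import Callable, Dict

# Hypothetical types: none of these names come from the discussion above.
Percepts = Dict[str, float]            # sensory inputs, memories included
Action = str                           # a possible motor output
UtilityFn = Callable[[Action], float]  # lowercase "utility function"

def Utility_Function(percepts: Percepts) -> UtilityFn:
    """Capital-U "Utility Function": percepts -> (outputs -> values)."""
    def utility(action: Action) -> float:
        # Stand-in scoring rule: how good this action looks given
        # everything the agent currently knows.
        return percepts.get("expected_payoff_of_" + action, 0.0)
    return utility

# The image of one percept-set under the Utility Function is itself a
# function from outputs to values:
u = Utility_Function({"expected_payoff_of_wait": 0.2,
                      "expected_payoff_of_move": 0.9})
best = max(["wait", "move"], key=u)    # selects "move"
```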
Utility functions are maps between sensory inputs (including memories) and scalar values associated with possible motor outputs.
Yes, that’s what I already quoted. But earlier in the same comment you said this:

It would still be a utility function—in that it would assign real-valued utilities to possible actions (before selecting the action with highest utility).
There you are saying that it maps actions to utilities. Hence my question.
I have something to say in response, but I can’t until I know what you actually mean, and the version that you have just reasserted makes no sense to me.
Utilities are scalar values associated with possible motor outputs (“actions” is a synonym for “motor outputs”).
The scalar values an agent needs in order to decide what to do are the ones associated with its possible actions. Agents typically consider their possible actions, predict the expected consequences of each, assign utilities to those consequences, and then select the action associated with the highest utility.
The inputs to the utility function are all the things the agent knows about the world—so: its sense inputs (up to and including its proposed action) and its memory contents.
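In sketch form (hypothetical names; the prediction and scoring steps are left as stubs the caller supplies):

```python
# Hypothetical sketch of the loop described above: enumerate the
# possible actions, predict each one's consequences from everything
# the agent knows (sense inputs plus memory contents), assign
# utilities to those consequences, then select the argmax action.

def choose_action(sense_inputs, memories, possible_actions,
                  predict_consequences, utility_of):
    knowledge = {**sense_inputs, **memories}   # all the agent knows
    utilities = {
        action: utility_of(predict_consequences(knowledge, action))
        for action in possible_actions
    }
    return max(utilities, key=utilities.get)   # highest-utility action
```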