Her formalism may be wrong—it probably is, since it’s possible to have ordinary nihilism which permits minimal self-maintenance. For that matter, those hitting-bottom rituals are still goal-directed behavior.
Still, pervasive akrasia or high-lethargy depression or whatever you want to call it does happen, and I think the post is a good effort at addressing it.
It should strive to be much better; at the very least, this utility-function mysticism could be avoided.
I agree with you and Nancy Lebovitz that it’s not literally the case that emotional nihilism corresponds to the trivial utility function—I think that SarahC did not intend to make this claim and was instead describing her subjective impressions of how emotional nihilism feels relative to a more common equilibrium emotional state.
I’m not sure exactly where in the conversation is the best place for me to inject this comment, but this may be as good a place as any.
I think it is important to realize that only rational agents can be behaviorally modeled using a utility function. Non-rational agents, including agents beset with “depression” or “nihilism”, don’t necessarily have well-defined utility functions at all, and even if they do, their behavior is not governed by expected utility in the way that a rational agent’s behavior is.
The success that the simple hypothesis of hyperbolic discounting has had in explaining akrasia has perhaps misled us into thinking that all departures from rationality can be modeled by simple tweaks to the standard machinery for modeling rational agents. It ain’t necessarily so.
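For concreteness, hyperbolic discounting weights a reward delayed by t by roughly 1/(1 + kt) rather than exponentially, and it is the crossing of those curves that produces the preference reversals associated with akrasia. A rough sketch (the rewards, delays, and discount parameters are purely illustrative):

```python
from math import exp

def hyperbolic(delay, k=1.0):
    """Hyperbolic discount weight: falls off as 1 / (1 + k * delay)."""
    return 1.0 / (1.0 + k * delay)

def exponential(delay, r=0.1):
    """Exponential discount weight, for comparison: falls off as exp(-r * delay)."""
    return exp(-r * delay)

# Choice between a reward of 10 at t=10 and a reward of 30 at t=20,
# evaluated far in advance (t=0) and just before the sooner reward (t=9).
for now in (0, 9):
    h_sooner, h_later = 10 * hyperbolic(10 - now), 30 * hyperbolic(20 - now)
    e_sooner, e_later = 10 * exponential(10 - now), 30 * exponential(20 - now)
    print(f"t={now}: hyperbolic prefers sooner: {h_sooner > h_later}, "
          f"exponential prefers sooner: {e_sooner > e_later}")
# The hyperbolic discounter reverses its preference as the sooner reward
# approaches; the exponential discounter never does.
```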
If you drop enough of the axioms (e.g. the axiom of independence) from the expected utility formalisation you can represent the behaviour of any creature you care to imagine with a utility function.
Eventually, such a function just becomes a map between sensory inputs (including memories) and motor outputs.
At some point, you can’t call it a utility function any more.
Such a hypothetical function is as useless as the supposed function, in a deterministic universe, for calculating all future states of the universe from an exact knowledge of its present.
Richard, I think your first point is probably based on a misconception about the idea. It would still be a utility function—in that it would assign real-valued utilities to possible actions (before selecting the action with highest utility). Being that which is maximised during action is what the term “utility” means.
Sure, if you go beyond that, then the word “utility” might eventually become inappropriate, but that is not what is being proposed.
I can’t make much sense of the second point. Utility functions are maps between sensory inputs (including memories) and scalar values associated with possible motor outputs. They are not useless if you do things like drop the axiom of independence. Indeed, the axiom of independence is the most frequently-dropped axiom.
It is generally useful to have an abstract utility-based model that can model the behaviour of any computable creature by plugging in a utility function.
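For concreteness, here is the kind of construction both sides seem to be gesturing at (a sketch only, with illustrative names): take any stimulus-response policy whatsoever and define a utility function that assigns 1 to whatever the policy would have done and 0 to everything else. A maximiser armed with that function reproduces the original behaviour exactly, which is why, once enough axioms are dropped, having a utility function stops ruling anything out.

```python
from typing import Callable, Sequence

Percepts = Sequence[str]   # sensory inputs plus memory contents, however encoded
Action = str

def utility_from_policy(policy: Callable[[Percepts], Action]
                        ) -> Callable[[Percepts, Action], float]:
    """Wrap an arbitrary input->output policy as a degenerate utility function:
    the action the policy would have produced gets utility 1, everything else 0."""
    def utility(percepts: Percepts, action: Action) -> float:
        return 1.0 if action == policy(percepts) else 0.0
    return utility

def act(percepts: Percepts, actions: Sequence[Action],
        utility: Callable[[Percepts, Action], float]) -> Action:
    """A maximiser: choose the available action with the highest utility."""
    return max(actions, key=lambda a: utility(percepts, a))

# Example: a "creature" whose policy is to curl up no matter what it senses.
lethargic = lambda percepts: "curl up"
u = utility_from_policy(lethargic)
print(act(["bright light", "hungry"], ["curl up", "forage", "flee"], u))  # curl up
```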
Hang on, a moment ago they were functions from outputs to values. Now they’re functions from inputs to values. Which are they?
Gonna take a wild stab:
A “Utility Function” is a function from the space of (sensory inputs including memories) to the space of (functions from outputs to values).
For any given set of (sensory inputs including memories) we can call that set’s image under our “Utility Function” a “utility function”, and then sometimes mess up the capitalization.
Is that more clear, and/or is that what was being said?
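In type-signature terms (my own rendering of this guess; the type names are placeholders):

```python
from typing import Callable, Sequence

Percepts = Sequence[str]   # sensory inputs, including memories
Action = str

# "Utility Function" (capital U): percepts -> (action -> value)
UtilityFunction = Callable[[Percepts], Callable[[Action], float]]

# "utility function" (small u): what you get after feeding in one particular
# set of percepts -- a plain map from actions to values.
utility_function = Callable[[Action], float]
```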
Utility functions are maps between sensory inputs (including memories) and scalar values associated with possible motor outputs.
Yes, that’s what I already quoted. But earlier in the same comment you said this:
It would still be a utility function—in that it would assign real-valued utilities to possible actions (before selecting the action with highest utility).
There you are saying that it maps actions to utilities. Hence my question.
I have something to say in response, but I can’t until I know what you actually mean, and the version that you have just reasserted makes no sense to me.
Utilities are scalar values associated with possible motor outputs (“actions” is a synonym for “motor outputs”).
The scalar values an agent needs in order to decide what to do are the ones which are associated with its possible actions. Agents typically consider their possible actions, consider their expected consequences, assign utilities to these consequences—and then select the action that is associated with the highest utility.
The inputs to the utility function are all the things the agent knows about the world—so: its sense inputs (up to and including its proposed action) and its memory contents.
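Spelled out as a sketch, the procedure just described looks roughly like this (the names are mine, and the prediction and utility functions are placeholders rather than a claim about how any real agent computes them):

```python
from typing import Callable, Dict, Sequence

Percepts = Sequence[str]
Action = str
Outcome = str

def choose_action(percepts: Percepts,
                  actions: Sequence[Action],
                  predict: Callable[[Percepts, Action], Dict[Outcome, float]],
                  utility: Callable[[Outcome], float]) -> Action:
    """Consider each possible action, predict a distribution over its
    consequences, score those consequences with the utility function,
    and select the action with the highest expected utility."""
    def expected_utility(action: Action) -> float:
        outcomes = predict(percepts, action)   # {outcome: probability}
        return sum(p * utility(o) for o, p in outcomes.items())
    return max(actions, key=expected_utility)
```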