“I just don’t care” is a curiosity-stopper. The actions of a “nihilistic” person of the kind you describe are still very specific: they don’t convulse uncontrollably, choosing to send random signals down their nerves. Thus, “all-zero utility function” is an incorrect model of the situation, making further analysis flawed.
Agreed that the all-zero utility function is more or less just wrong.
People like this can still remember what happiness is and wish that they were happy; they can dislike feeling nihilistic.
They can still experience all sorts of things as unpleasant, such as making an effort.
A state of mind in which happiness is very difficult to obtain and drive/motivation is at an extremely low ebb is not a zero utility function.
Nonetheless I find it very easy to understand why “zero utility function” would be used in this case as a poetic metaphor.
Good point, and something to think about. Obviously someone who assigned truly equal value to every possible action would behave completely at random, which nobody does.
A better guess: what happens when you feel nihilistic is anhedonia. You don’t get as much value or satisfaction out of the “peaks”—experiences that once were very desirable are now less so. This results in expending much less effort to attain the most desirable things. Your ability to desire intensely is messed up.
I think you could model that by flattening out the peaks. It leaves most processes intact (you still speak in language, you still put on clothes, etc.) but it diminishes motivation, anticipation, and happiness. You can do a little goal-directed activity (rock-bottom rituals, choosing to eat or sleep) but much less than normal.
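Here is a minimal sketch of what I mean by flattening the peaks (the option names, the numbers, and the softmax choice rule are all made up for illustration, not anything formal from the post):

```
import math
import random

def softmax_choice(utilities, temperature=1.0):
    # Pick an action with probability proportional to exp(utility / temperature).
    actions = list(utilities)
    weights = [math.exp(utilities[a] / temperature) for a in actions]
    r = random.uniform(0, sum(weights))
    cumulative = 0.0
    for action, w in zip(actions, weights):
        cumulative += w
        if r <= cumulative:
            return action
    return actions[-1]

# Ordinary motivation: a big gap between the most and least desirable options.
normal = {"pursue project": 10.0, "eat": 4.0, "get dressed": 3.0, "lie on couch": 1.0}

# "Flattened peaks": cap the formerly intense desires near the level of routine
# maintenance, leaving the everyday low-stakes options untouched.
def flatten_peaks(utilities, cap=3.0):
    return {action: min(value, cap) for action, value in utilities.items()}

anhedonic = flatten_peaks(normal)

# The flattened agent is not random (so no all-zero utility function); it just
# spends far less of its behaviour on the formerly high-value goals.
print([softmax_choice(normal) for _ in range(10)])
print([softmax_choice(anhedonic) for _ in range(10)])
```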
Yes, reduced intensity and the resulting disturbed balance of psychological drives is a much better description.
Her formalism may be wrong—it probably is, since it’s possible to have ordinary nihilism which permits minimal self-maintenance. For that matter, those hitting-bottom rituals are still goal-directed behavior.
Still, pervasive akrasia or high-lethargy depression or whatever you want to call it does happen, and I think the post is a good effort at addressing it.
It should strive to be much better; at the very least, the utility-function mysticism could be avoided.
I agree with you and Nancy Lebovitz that it’s not literally the case that emotional nihilism corresponds to the trivial utility function—I think that SarahC did not intend to make this claim and was instead describing her subjective impressions of how emotional nihilism feels relative to a more common equilibrium emotional state.
I’m not sure exactly where in the conversation is the best place for me to inject this comment, but this may be as good a place as any.
I think that it is important to realize that only rational agents can be behaviorally modeled using a utility function. Non-rational agents, including agents beset with “depression” or “nihilism”, don’t necessarily even have well-defined utility functions, and if they do have them, their behavior is not controlled by expected utility in the same way that the behavior of rational agents is.
The success that the simple hypothesis of hyperbolic discounting has had in explaining akrasia has perhaps misled us into thinking that all departures from rationality can be modeled by simple tweaks to the standard machinery for modeling rational agents. It ain’t necessarily so.
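For concreteness, here is the standard preference-reversal toy calculation behind the hyperbolic-discounting story; the numbers are made up and nothing here is specific to this thread:

```
def hyperbolic(value, delay, k=1.0):
    # Hyperbolic discounting: value / (1 + k * delay).
    return value / (1 + k * delay)

# A smaller-sooner reward versus a larger-later one (made-up numbers).
small, small_delay = 50, 1   # e.g. slacking off tonight
large, large_delay = 100, 7  # e.g. having the project done in a week

# Judged from a month away, the larger-later reward looks better...
print(hyperbolic(small, small_delay + 30), hyperbolic(large, large_delay + 30))
# ...but at the moment of choice the smaller-sooner reward wins: akrasia as a
# preference reversal produced by one tweak to otherwise standard machinery.
print(hyperbolic(small, small_delay), hyperbolic(large, large_delay))
```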
If you drop enough of the axioms (e.g. the axiom of independence) from the expected utility formalisation you can represent the behaviour of any creature you care to imagine with a utility function.
Eventually, such a function just becomes a map between sensory inputs (including memories) and motor outputs.
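To spell out the degenerate end of that: given any input-to-output behaviour at all, you can define a “utility function” that scores 1 for whatever the creature was going to do anyway and 0 for everything else, and maximising it just replays the behaviour. A toy sketch (mine, purely illustrative):

```
def utility_from_behaviour(policy):
    # Wrap an arbitrary input -> output policy as a degenerate "utility function":
    # assign 1 to the action the policy would take on these inputs, 0 to the rest.
    # Maximising it reproduces the original behaviour exactly, which is the sense
    # in which the notion collapses into a plain input-output map.
    def utility(inputs, action):
        return 1.0 if action == policy(inputs) else 0.0
    return utility

# A "creature" whose behaviour is an arbitrary lookup, with no rationality implied.
behaviour = {"hungry": "eat", "tired": "sleep", "bored": "stare at wall"}.get
u = utility_from_behaviour(behaviour)

actions = ["eat", "sleep", "stare at wall"]
print(max(actions, key=lambda a: u("hungry", a)))  # "eat", by construction
```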
At some point, you can’t call it a utility function any more.
Such a hypothetical function is as useless as the supposed function, in a deterministic universe, for calculating all future states of the universe from an exact knowledge of its present.
Richard, I think your first point is probably based on a misconception about the idea. It would still be a utility function—in that it would assign real-valued utilities to possible actions (before selecting the action with highest utility). Being that which is maximised during action is what the term “utility” means.
Sure, if you go beyond that, then the word “utility” might eventually become inappropriate, but that is not what is being proposed.
I can’t make much sense of the second point. Utility functions are maps between sensory inputs (including memories) and scalar values associated with possible motor outputs. They are not useless if you do things like drop the axiom of independence. Indeed, the axiom of independence is the most frequently-dropped axiom.
It is generally useful to have an abstract utility-based model that can model the behaviour of any computable creature by plugging in a utility function.
Hang on, a moment ago they were functions from outputs to values. Now they’re functions from inputs to values. Which are they?
Gonna take a wild stab:
A “Utility Function” is a function from the space of (sensory inputs including memories) to the space of (functions from outputs to values).
For any given set of (sensory inputs including memories) we can call that set’s image under our “Utility Function” a “utility function” and then sometimes mess up the capitalization.
Is that more clear, and/or is that what was being said?
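Or, put as type signatures (my paraphrase of the guess above; the example values are invented):

```
from typing import Callable, Dict

SensoryInputs = Dict[str, float]  # everything the agent knows: senses plus memories
Action = str                      # a possible motor output
Utility = float                   # a scalar value

# The capital-letter "Utility Function": inputs -> (a function from actions to utilities).
UtilityFunction = Callable[[SensoryInputs], Callable[[Action], Utility]]

def example_big_uf(inputs: SensoryInputs) -> Callable[[Action], Utility]:
    # Given what the agent currently knows, return a lowercase "utility function".
    hunger = inputs.get("hunger", 0.0)
    def utility(action: Action) -> Utility:
        # Toy values: eating looks better the hungrier the agent currently is.
        return {"eat": hunger, "sleep": 1.0}.get(action, 0.0)
    return utility

# The image of one particular input-set is an actions -> utilities map, and that
# is the thing being maximised when the agent picks what to do.
u = example_big_uf({"hunger": 3.0})
print(max(["eat", "sleep", "stare at wall"], key=u))  # "eat"
```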
Utility functions are maps between sensory inputs (including memories) and scalar values associated with possible motor outputs.
Yes, that’s what I already quoted. But earlier in the same comment you said this:
It would still be a utility function—in that it would assign real-valued utilities to possible actions (before selecting the action with highest utility).
There you are saying that it maps actions to utilities. Hence my question.
I have something to say in response, but I can’t until I know what you actually mean, and the version that you have just reasserted makes no sense to me.
Utilities are scalar values associated with possible motor outputs (“actions” is a synonym for “motor outputs”).
The scalar values an agent needs in order to decide what to do are the ones which are associated with its possible actions. Agents typically consider their possible actions, consider their expected consequences, assign utilities to these consequences—and then select the action that is associated with the highest utility.
The inputs to the utility function are all the things the agent knows about the world—so: its sense inputs (up to and including its proposed action) and its memory contents.
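Rendered as a toy loop (my own sketch of the procedure just described, with made-up consequences and values):

```
def choose_action(consequences, utility_of_consequence):
    # Consider each possible action, look up its expected consequence, assign that
    # consequence a utility, and pick the action whose consequence scores highest.
    scored = {a: utility_of_consequence(c) for a, c in consequences.items()}
    return max(scored, key=scored.get)

# Made-up world model: what the agent expects each action to lead to.
consequences = {
    "work on project": "project progresses",
    "watch television": "mild amusement",
    "go to sleep": "rested tomorrow",
}

# Made-up utilities over the expected consequences.
values = {"project progresses": 8.0, "mild amusement": 2.0, "rested tomorrow": 5.0}

print(choose_action(consequences, values.get))  # "work on project"
```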
I don’t think the point of the post is about reaching complete nihilism. It’s reaching a point where you more or less think “what difference could it make?” and then stop at “I don’t care”. It’s not exactly all utility being zero (because, like I said in my other comment, that would mean doing nothing, and there’s no way out then), but it’s damn near close and is a problem for just about anyone in the “nearby-nihilism” state.
Being indifferent doesn’t mean doing nothing. How would you privilege “doing nothing” over other courses of action, if you are indifferent to everything?
It’s not so much consciously privileging “doing nothing” over anything else as looking at everything else you’d usually do, not caring about any of those options, thinking up some alternatives, still not caring, and subsequently just doing nothing, possibly because it’s easiest.
So one does still care about things being easy.
Possibly, so I guess it’s not complete nihilism. Or it’s just null-set nihilism: If nothing seems worth doing, do nothing.
Note the fact that, in my original scenario, we considered alternative choices of action. I get the feeling a pure nihilistic engine wouldn’t even do that, so I’m already arguing from the wrong point.
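A toy illustration of the difference (my own sketch, made-up numbers): under pure indifference the choice is arbitrary, but as soon as a small effort cost enters, “do nothing” wins deterministically rather than randomly.

```
# Made-up options: near-indifference over outcomes, but the options differ in effort.
value = {"go out": 1.0, "call a friend": 1.0, "tidy up": 1.0, "do nothing": 1.0}
effort = {"go out": 0.5, "call a friend": 0.3, "tidy up": 0.4, "do nothing": 0.0}

# Pure indifference: every option ties, so the pick is arbitrary, which is roughly
# what a literal flat utility function would predict.
print(max(value, key=value.get))  # just whichever tied option comes first

# "Near-nihilism" as described above: outcomes barely matter but effort still does,
# so the agent reliably lands on the easiest option instead of acting at random.
net = {option: value[option] - effort[option] for option in value}
print(max(net, key=net.get))  # "do nothing"
```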