van Gelder holds that an algorithmic approach is simply unsuitable for understanding the centrifugal governor. It just doesn’t work, and there’s no reason to even try. To understand the behavior of the centrifugal governor, the appropriate tools are differential equations that describe its behavior as a dynamical system where the properties of various parts depend on each other.
Changing a parameter of a dynamical system changes its total dynamics (that is, the way its state variables change their values depending on their current values, across the full range of values they may take). Thus, any change in engine speed, no matter how small, changes not the state of the governor directly, but rather the way the state of the governor changes, and any change in arm angle changes the way the state of the engine changes. Again, however, the overall system (coupled engine and governor) settles quickly into a point attractor, that is, engine speed and arm angle remain constant.
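To make the dynamical-systems picture concrete, here is a toy simulation of a coupled governor-engine system. The arm equation follows the Maxwell-style form van Gelder discusses, but the engine coupling and every parameter value are invented for illustration, not anything from the paper:

```python
# A toy coupled governor-engine system. The arm equation is the
# Maxwell-style form; the engine coupling and all parameters are
# invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

g, l, n, r = 9.81, 0.25, 6.0, 2.0  # gravity, arm length, gearing, friction (assumed)
k, load = 0.5, 1.0                 # engine gain and constant load (assumed)

def dynamics(t, state):
    theta, theta_dot, omega = state  # arm angle, arm velocity, engine speed
    # Arm: centrifugal term vs. gravity vs. friction.
    theta_ddot = ((n * omega) ** 2 * np.cos(theta) * np.sin(theta)
                  - (g / l) * np.sin(theta) - r * theta_dot)
    # Toy engine: the throttle closes as the arms rise, so torque falls with theta.
    omega_dot = k * ((np.pi / 2 - theta) - load)
    return [theta_dot, theta_ddot, omega_dot]

sol = solve_ivp(dynamics, (0, 30), [0.3, 0.0, 1.0])
print(sol.y[:, -1])  # theta, theta_dot, omega settle toward a point attractor
```

Note that no variable directly sets another's value: each equation only says how the rates of change depend on the current state, which is exactly the point made above.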
Now we finally get into utility functions. van Gelder holds that all the various utility theories, no matter how complex, remain subject to specific drawbacks:
(1) They do not incorporate any account of the underlying motivations that give rise to the utility that an object or outcome holds at a given time.
(2) They conceive of the utilities themselves as static values, and can offer no good account of how and why they might change over time, and why preferences are often inconsistent and inconstant.
(3) They offer no serious account of the deliberation process, with its attendant vacillations, inconsistencies, and distress; and they have nothing to say about the relationships that have been uncovered between time spent deliberating and the choices eventually made.
Curiously, these drawbacks appear to have a common theme; they all concern, one way or another, temporal aspects of decision making. It is worth asking whether they arise because of some deep structural feature inherent in the whole framework which conceptualizes decision-making behavior in terms of calculating expected utilities.
Notice that utility-theory-based accounts of human decision making (“utility theories”) are deeply akin to the computational solution to the governing task. That is, if we take such accounts as not just describing the outcome of decision-making behavior, but also as a guide to the structures and processes that underlie such behavior, then there are basic structural similarities to the computational governor. Thus, utility theories are straightforwardly computational; they are based on static representations of options, utilities, probabilities, and so on, and processing is the algorithmically specifiable internal manipulation of these representations to obtain a final representation of the choice to be made. Consequently, utility theories are strictly sequential; they presuppose some initial temporal stage at which the relevant information about options, likelihoods, and so on, is acquired; a second stage in which expected utilities are calculated; and a third stage at which the choice is effected in actual behavior. And, like the computational governor, they are essentially atemporal; there are no inherent constraints on the timing of the various internal operations with respect to each other or to change in the environment.
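To see that three-stage structure in miniature, here is the standard expected-utility calculation as code; the options, probabilities, and utilities are all made up:

```python
# The three-stage structure in miniature; options and numbers invented.
options = {
    "apple": [(0.9, 5.0), (0.1, -1.0)],  # (probability, utility) pairs
    "cake":  [(0.5, 10.0), (0.5, -4.0)],
}

# Stage 1: static representations stand in for the acquired information.
# Stage 2: algorithmic manipulation -- compute each expected utility.
expected = {o: sum(p * u for p, u in pairs) for o, pairs in options.items()}

# Stage 3: a final representation of the choice, to be effected in behavior.
choice = max(expected, key=expected.get)
print(expected, choice)  # apple: 4.4, cake: 3.0 -> choose apple
```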
What we have, in other words, is a model of human cognition which, on one hand, instantiates the same deep structure as the computational governor, and on the other, seems structurally incapable of accounting for certain essentially temporal dimensions of decision-making behavior. At this stage, we might ask: what kind of model of decision-making behavior would we get if, rather, we took the centrifugal governor as a prototype? It would be a model with a relatively small number of continuous variables influencing each other in real time. It would be governed by nonlinear differential equations. And it would be a model in which the agent and the choice environment, like the governor and the engine, are tightly interlocked.
It would, in short, be rather like the motivational oscillatory theory (MOT) modeling framework described by mathematical psychologist James Townsend. MOT enables modeling of various qualitative properties of the kind of cyclical behaviors that occur when circumstances offer the possibility of satiation of desires arising from more or less permanent motivations; an obvious example is regular eating in response to recurrent natural hunger. It is built around the idea that in such situations, your underlying motivation, transitory desires with regard to the object, distance from the object, and consumption of it are continuously evolving and affecting each other in real time; for example, if your desire for food is high and you are far from it, you will move toward it (that is, your distance from it changes), which influences your satiation and so your desire. The framework thus includes variables for the current state of motivation, satiation, preference, and action (movement), and a set of differential equations describes how these variables change over time as a function of the current state of the system.
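To give a feel for the shape of such a model, here is a toy system of coupled equations. These are not Townsend's actual MOT equations, just an invented sketch with the same qualitative ingredients (a standing motivation, transitory desire, satiation, and movement influencing each other continuously):

```python
# NOT Townsend's actual equations: an invented system with an MOT-like
# shape, to show the kind of model meant (all numbers illustrative).
import numpy as np
from scipy.integrate import solve_ivp

def mot(t, state):
    satiation, distance = state
    motivation = 1.0                                  # near-permanent drive (hunger)
    desire = motivation * (1.0 - satiation)           # transitory desire for the food
    eating = desire if abs(distance) < 0.1 else 0.0   # consumption only when close
    d_satiation = eating - 0.2 * satiation            # satiation decays between meals
    # Approach when desire is high; drift away from the food when satiated.
    d_distance = -desire * np.tanh(distance / 0.1) + 1.5 * satiation
    return [d_satiation, d_distance]

sol = solve_ivp(mot, (0, 60), [0.0, 5.0], max_step=0.05)
# The variables influence one another continuously in real time;
# consumption recurs as satiation decays, giving cyclical eating.
```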
van Gelder holds that an algorithmic approach is simply unsuitable for understanding the centrifugal governor. It just doesn’t work, and there’s no reason to even try. To understand the behavior of the centrifugal governor, the appropriate tools are differential equations that describe its behavior as a dynamical system where the properties of various parts depend on each other.
A set of differential equations that describe its behavior as a dynamical system where the properties of various parts depend on each other would still be an algorithm. van Gelder appears not to have heard of universal computation.
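Indeed, simulating such a system is itself algorithmic. A generic Euler integrator makes the point in a few lines (step size and example dynamics chosen arbitrarily):

```python
# Euler integration: simulating a dynamical system dx/dt = f(x) is
# itself an algorithm. Step size and example f are arbitrary choices.
def euler(f, x0, dt=0.01, steps=1000):
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)  # discrete update approximating the continuous flow
    return x

print(euler(lambda x: -x, 1.0))  # decays toward the fixed point at 0
```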
(1) They do not incorporate any account of the underlying motivations that give rise to the utility that an object or outcome holds at a given time.
I would say that the selection and representation of values is exactly this account.
(2) They conceive of the utilities themselves as static values, and can offer no good account of how and why they might change over time
False.
and why preferences are often inconsistent and inconstant.
Perceived preferences are often inconsistent and inconstant. So you try to find underlying preferences.
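For instance, nothing stops a utility function from taking the agent's internal state as an argument, so that the utilities assigned to outcomes change over time as that state changes, and apparently inconsistent surface preferences fall out of one consistent underlying function. A hypothetical sketch:

```python
# Hypothetical sketch: one fixed underlying utility function whose
# outputs change over time because internal state is one of its inputs.
def utility(outcome: str, hunger: float) -> float:
    base = {"eat": 0.5, "work": 0.8}[outcome]  # invented baseline values
    return base + (2.0 * hunger if outcome == "eat" else 0.0)

for hunger in (0.1, 0.9):  # same outcomes, ranked differently as state changes
    ranking = sorted(["eat", "work"], key=lambda o: -utility(o, hunger))
    print(hunger, ranking)  # work wins when sated, eat wins when hungry
```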
(3) They offer no serious account of the deliberation process, with its attendant vacillations, inconsistencies, and distress; and they have nothing to say about the relationships that have been uncovered between time spent deliberating and the choices eventually made.
Also false. The utility function itself is precisely a model of the deliberation process. It isn’t going to be an equation that fits on a single line. And it is going to have some computational complexity, which will account for the relationship between time spent deliberating and the choice eventually made.
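As a toy illustration of that last point: if evaluating the utility function is costly, the time budget given to deliberation directly affects which choice comes out. The sampling scheme and all numbers below are invented:

```python
# Invented example: utility evaluation as a costly, anytime computation,
# so deliberation time changes the choice that is eventually made.
import random

def estimated_utility(option: int, samples: int) -> float:
    rng = random.Random(option)  # stand-in for a costly, noisy evaluation
    return sum(rng.gauss(0.1 * option, 1.0) for _ in range(samples)) / samples

def deliberate(options, time_budget: int) -> int:
    samples = max(1, time_budget)  # more deliberation time buys better estimates
    return max(options, key=lambda o: estimated_utility(o, samples))

print(deliberate(range(5), time_budget=1))    # hasty: estimates are noisy
print(deliberate(range(5), time_budget=500))  # unhurried: likely picks option 4
```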
I hope—because this is the most charitable interpretation I can make—that all these people complaining about utility functions are just forgetting that it uses the word “function”. Not “arithmetic function”, or “regular expression”. Any computable function. If an output can’t be modelled with a utility function, it is non-computable. If humans can’t be modelled with utility functions, that is a proof that a computer program can’t be intelligent. I’m not concerned with whether this is a good model. I just want to be able to say, theoretically, that the question of what a human should do in response to a situation is something that can be said to have right answers and wrong answers, given that human’s values/preferences/morals.
All this harping about whether utility functions can model humans is not very relevant to my post. I bring up utility functions only to communicate, to a LW audience, that you are only doing what you want to do when you behave morally. If you have some other meaningful way of stating this—of saying what it means to “do what you want to do”—by all means do so!
(If you want to work with meta-ethics, and ask why some things are right and some things are wrong, you do have to work with utility functions, if you believe anything like the account in the meta-ethics sequence; for the same reason that evolution needs to talk about fitness. If you just want to talk about what humans do—which is what I’m doing here—you don’t have to talk about utility functions unless you want to be able to evaluate whether a particular human is behaving morally or immorally. To make such a judgement, you have to have an algorithm that computes a judgement on an action in a situation. And that algorithm computes a utility function.)
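Put as code, that closing claim is about a type signature: any judgment algorithm has the shape below, and a computable function of that shape just is a utility function. The body here is a placeholder, not a proposed moral theory:

```python
# Any algorithm that judges an action in a situation has this type;
# a (computable) function of this type just is a utility function.
def judgement(situation: str, action: str) -> float:
    # Placeholder body: real deliberation could be arbitrarily complex.
    return 1.0 if action == "keep promise" else 0.0

print(judgement("made a promise", "keep promise"))  # 1.0: judged moral
```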
A set of differential equations that describe its behavior as a dynamical system where the properties of various parts depend on each other would still be an algorithm.
Sure, but that’s not the sense of “algorithm” that was being used here.
If an output can’t be modelled with a utility function, it is non-computable. If humans can’t be modelled with utility functions, that is a proof that a computer program can’t be intelligent. I’m not concerned with whether this is a good model. I just want to be able to say, theoretically, that the question of what a human should do in response to a situation is something that can be said to have right answers and wrong answers, given that human’s values/preferences/morals.
None of this is being questioned. You said that you’re not concerned with whether this is a good model, and that’s fine, but whether or not it is a good model was the whole point of my comment. Neither I nor van Gelder claimed that utility functions couldn’t be used as models in principle.
All this harping about whether utility functions can model humans is not very relevant to my post. I bring up utility functions only to communicate, to a LW audience, that you are only doing what you want to do when you behave morally.
My comments did not question the conclusions of your post (which I agreed with and upvoted). I was only addressing the particular paragraph which I quoted in my initial comment. (I should probably have mentioned that IAWYC in that one. I’ll edit that in now.)
Sorry. I’m getting very touchy about references to utility functions now. When I write a post, I want to feel like I’m discussing a topic. On this post, I feel like I’m trying to compile C++ code and the comments are syntax error messages. I’m pretty much worn out on the subject for now, and probably getting sloppy, even though the post could still use a lot of clarification.
No problem—I could have expressed myself more clearly, as well.
Take it positively: if people mostly just nitpick on your utility function bit, then that implies that they agree with the rest of what you wrote. I didn’t have much disagreement with the actual content of your post, either.
Kaj may be too humble to self-link the relevant top level post, so I’ll do it for him.
I actually didn’t link to it, because I felt that those comments ended up conveying the same point better than the post did.