Humans don’t operate by maximizing utility, for any definition of “utility” that isn’t hideously tortured.
Actually, the definition of “utility” is pretty simple. It is simply “that thing that gets maximized in any particular person’s decision making”. Perhaps you think that humans do not maximize utility because you have a preferred definition of utility that is different from this one.
Mostly, we simply act in ways that keep the expected value of relevant perceptual variables (such as our own feelings) within our personally-defined tolerances.
Ok, that is a plausible-sounding alternative to the idea of maximizing something. But the maximizing theory has been under scrutiny for 150 years, and under strong scrutiny for the past 50. It only seems fair to give your idea some scrutiny too. Two questions jump out at me:
What decision is made when multiple choices all leave the variables within tolerance?
What decision is made when none of the available choices leave the variables within tolerance?
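For concreteness, here is a minimal sketch of the tolerance-based picture (all names, numbers, and fallback rules are mine, purely hypothetical), with those two questions showing up as the two branches the theory leaves underspecified:

```python
import random

# Hypothetical sketch of tolerance-based ("satisficing") choice.
# Each choice maps to a predicted value of some perceptual variable;
# a choice is acceptable if that prediction stays within tolerance.
def satisfice(choices, predict, low, high, rng=random):
    acceptable = [c for c in choices if low <= predict(c) <= high]
    if acceptable:
        # Question 1: several choices within tolerance. Here we pick
        # arbitrarily, which is exactly the underspecified case.
        return rng.choice(acceptable)
    # Question 2: nothing within tolerance. Here we fall back to
    # minimizing distance to the tolerance band, which quietly
    # reintroduces a maximization step.
    return min(choices, key=lambda c: min(abs(predict(c) - low),
                                          abs(predict(c) - high)))

# Example: keep predicted "stress" between 2 and 5.
choices = {"walk": 3, "drive": 6, "stay home": 1}
print(satisfice(list(choices), choices.get, 2, 5))  # -> walk
```

Either branch has to be filled in with something, and whatever fills it in starts to look suspiciously like a criterion being optimized.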
Looking forward to hearing your answer on these points. If we can turn your idea into a consistent and plausible theory of human decision making, I’m sure we can publish it.
Actually, the definition of “utility” is pretty simple. It is simply “that thing that gets maximized in any particular person’s decision making”
Ah, “the advantage of theft over honest toil”. Writing down a definite noun phrase does not guarantee the existence of a thing in reality that it names.
But the maximizing theory has been under scrutiny for 150 years, and under strong scrutiny for the past 50.
Some specific references would help in discerning what, specifically, you are alluding to here. You say in another comment in this thread:
I have mostly cited the standard textbook thought-experiments
but you have not done this at all, merely made vague allusions to “the last 150 years” and “standard economic game theory”.
Well, you can’t get much more standard than Von Neumann and Morgenstern’s “Theory of Games and Economic Behavior”. This book does not attempt to justify the hypothesis that we maximise something when we make decisions. That is an assumption that they adopt as part of the customary background for the questions they want to address. Historically, the assumption goes back to the questions about gambling that got probability theory started, in which there is a definite thing—money—that people can reasonably be regarded as maximising. Splitting utility from money eliminates complications due to diminishing marginal utility of money. The Utility Theorem does not prove, or attempt to prove, that we are maximisers. It is a not very deep mathematical theorem demonstrating that certain axioms on a set imply that it is isomorphic to an interval of the real line. The hypothesis that human preferences are accurately modelled as a function from choices to a set satisfying those axioms is nowhere addressed in the text.
I shall name this the Utility Hypothesis. What evidence are you depending on for asserting it?
Ah, “the advantage of theft over honest toil”. Writing down a definite noun phrase does not guarantee the existence of a thing in reality that it names.
That isn’t a particularly good example. There are advantages to theft over honest toil. It is just considered inappropriate to acknowledge them.
I have a whole stash of audio books that I purchased with the fruit of ‘honest toil’. I can no longer use them because they are crippled with DRM. I may be able to sift around and find the password somewhere but to be honest I suspect it would be far easier to go and ‘steal’ a copy.
Oh, then there’s the bit where you can get a whole lot of money and stuff for free. That’s an advantage!
I liked the metaphor. Russell was a smart man. But so was von Neumann, and Aumann and Myerson must have gotten their Nobel prizes for doing something useful.
Axiomatic “theft” has its place alongside empirical “toil”.
I liked the metaphor. Russell was a smart man. But so was von Neumann, and Aumann and Myerson must have gotten their Nobel prizes for doing something useful.
So, am I to understand that you like people with Nobel prizes? If I start writing the names of impressive people can I claim some of their status for myself too? How many times will I be able to do it before the claims start to wear thin?
Only if you are endorsing their ideas in the face of an opposition which cannot cite such names. ;)
I haven’t observed other people referencing those same names both before and after your appearance having all that much impact on you. Nor have I taken seriously your attempts to present a battle between “Perplexed and all Nobel prize winners” vs “others”. I’d be very surprised if the guys behind the names really had your back in these fights, even if you are convinced you are fighting in their honour.
Improvements to this version have been made by Savage and by Anscombe and Aumann. You can get a useful survey of the field from Wikipedia. Wikipedia is an amazing resource, by the way. I strongly recommend it.
Two texts from my own bookshelf that contain expositions of this material are Chapter 1 of Myerson and Chapter 2 of Luce and Raiffa. I would recommend the Myerson. Luce and Raiffa is cheaper, but it is somewhat dated and doesn’t provide much coverage at all of the more advanced topics such as correlated equilibria and the revelation principle. It does have some good material on Nash’s program, though.
And finally, for a bit of fun in the spirit of Project Steve, I offer this online bibliography of some of the ways this body of theory has been applied in one particular field.
The hypothesis that human preferences are accurately modelled as a function from choices to a set satisfying those axioms is nowhere addressed in the text.
I shall name this the Utility Hypothesis. What evidence are you depending on for asserting it?
Did I assert it? Where? I apologize profusely if I did anything more than to suggest that it provides a useful model for the more important and carefully considered economic decisions. I explicitly state here that the justification of the theory is not empirical. The theory is about rational decision making, not human decision making.
It is not. As I said, the authors do not attempt to justify the Utility Hypothesis; they assume it. Chapter 2 (not 3), page 8: “This problem [of what to assume about individuals in economic theory] has been stated traditionally by assuming that the consumer desires to obtain a maximum of utility or satisfaction and the entrepreneur a maximum of profits.” The entire book is about the implications of that assumption, not its justification, of which it says nothing.
Improvements to this version have been made by Savage and by Anscombe and Aumann.
Neither do these authors attempt to justify the Utility Hypothesis; they too assume it. I can find Luce and Raiffa in my library and Myerson through inter-library loan, but as none of the first three works you’ve cited provide evidence for the claim that people have utility functions, rather than postulating it as an axiom, I doubt that these would either.
But now you deny having asserted any such thing:
Did I assert [the Utility Hypothesis]? Where?
Here you claim that people have utility functions:
I would prefer to assume that natural selection endowed us with a rational or near-rational decision theory and then invested its fine tuning into adjusting our utility functions.
And also here:
Parents clearly include their children’s welfare in their own utility functions.
Here you assume that people must be talking about utility functions:
If you and EY think that the PD players don’t like to rat on their friends, all you are saying is that those standard PD payoffs aren’t the ones that match the players’ real utility functions, because the real functions would include a hefty penalty for being a rat.
Referring to the message from which the last three quotes are taken, you say
I explicitly state here that the justification of the theory is not empirical.
and yet here you expand the phrase “prefer to assume” as :
I mean that making assumptions as I suggest leads to a much more satisfactory model of the issues being discussed here. I don’t claim my viewpoint is closer to reality (though the lack of an omniscient Omega certainly ought to give me a few points for style in that contest!). I claim that my viewpoint leads to a more useful model—it makes better predictions, is more computationally tractable, is more suggestive of ways to improve human institutions, etc.
These are weasel words to let you talk about utility functions while denying you think there are any such things.
How would you set about finding a model that is closer to reality, rather than one which merely makes better predictions?
How would you set about finding a model that is closer to reality, rather than one which merely makes better predictions?
I would undertake an arduous self-education in neuroscience. Thankfully, I have no interest in cognitive models which are close to reality but make bad predictions. I’m no longer as good at learning whole new fields as I was when I was younger, so I would find neuroscience a tough slog.
It’s a losing battle to describe humans as utility maximizers. Utility, as applied to people, is more useful in the normative sense, as a way to formulate one’s wishes, allowing one to infer how one should act in order to follow them.
Nevertheless, standard economic game theory frequently involves an assumption that it is common knowledge that all players are rational utility maximizers. And the reason it does so is the belief that on the really important decisions, people work extra hard to be rational.
For this reason, on the really important decisions, utility maximization probably is not too far wrong as a descriptive theory.
Nevertheless, standard economic game theory frequently involves an assumption that it is common knowledge that all players are rational utility maximizers. And the reason it does so is the belief that on the really important decisions, people work extra hard to be rational.
The reason it does so is because it is convenient.
I don’t entirely agree with pjeby. Being unable to adequately approximate human preferences with a single utility function is not a property of the ‘real world’; it is a property of our rather significant limitations when it comes to making such evaluations. Nevertheless, having a textbook prescribe official status to certain mechanisms for deriving a utility function does not make that process at all reliable.
… having a textbook prescribe official status to certain mechanisms for deriving a utility function does not make that process at all reliable.
I’ll be sure to remember that line, for when the people promoting other models of rationality start citing textbooks too. Well, no, I probably won’t, since I doubt I will live long enough to see that. ;)
But, if I recall correctly, I have mostly cited the standard textbook thought-experiments when responding to claims that utility maximization is conceptually incoherent—so absurd that no one in their right mind would propose it.
I’ll be sure to remember that line, for when the people promoting other models of rationality start citing textbooks too. Well, no, I probably won’t, since I doubt I will live long enough to see that. ;)
I see that you are trying to be snide, but it took a while to figure out why you would believe this to be incisive. I had to reconstruct a model of what you think other people here believe from your previous rants.
But, if I recall correctly, I have mostly cited the standard textbook thought-experiments when responding to claims that utility maximization is conceptually incoherent—so absurd that no one in their right mind would propose it.
Yes. That would be a crazy thing to believe. (Mind you, I don’t think pjeby believes crazy things—he just isn’t listening closely enough to what you are saying to notice anything other than a nail upon which to use one of his favourite hammers.)
For this reason, on the really important decisions, utility maximization probably is not too far wrong as a descriptive theory.
It seems to me that what has actually been shown is that when people think abstractly (i.e. “far”) about these kinds of decisions, they attempt to calculate some sort of (local and extremely context-dependent) maximum utility.
However, when people actually act (using “near” thinking), they tend to do so based on the kind of perceptual filtering discussed in this thread.
What’s more, even their “far” calculations tend to be biased and filtered by the same sort of perceptual filtering processes, even when they are (theoretically) calculating “utility” according to a contextually-chosen definition of utility. (What a person decides to weigh into a calculation of “best car” is going to vary from one day to the next, based on priming and other factors.)
In the very best case scenario for utility maximization, we aren’t even all that motivated to go out and maximize utility: it’s still more like playing “pick the best perceived-available option”, which is really not the same thing as operating to maximize utility (e.g. the number of paperclips in the world). Even the most paperclip-obsessed human being wouldn’t be able to do a good job of intuiting the likely behavior of a true paperclip-maximizing agent—even if said agent were of only-human intelligence.
standard economic game theory frequently involves an assumption that it is common knowledge that all players are rational utility maximizers.
For me, I’m not sure that “rational” and “utility maximizer” belong in the same sentence. ;-)
In simplified economic games (think: spherical cows on a frictionless plane), you can perhaps get away with such silliness, but instrumental rationality and fungible utility don’t mix under real world conditions. You can’t measure a human’s perception of “utility” on just a single axis!
For me, I’m not sure that “rational” and “utility maximizer” belong in the same sentence. ;-)
In simplified economic games (think: spherical cows on a frictionless plane), you can perhaps get away with such silliness, but instrumental rationality and fungible utility don’t mix under real world conditions.
You have successfully communicated your scorn. You were much less successful at convincing anyone of your understanding of the facts.
You can’t measure a human’s perception of “utility” on just a single axis!
And you can’t (consistently) make a decision without comparing the alternatives along a single axis. And there are dozens of textbooks with a chapter explaining in detail exactly how you go about doing it.
And you can’t (consistently) make a decision without comparing the alternatives along a single axis.
And what makes you think humans are any good at making consistent decisions?
The experimental evidence clearly says we’re not: frame a problem in two different ways, you get two different answers. Give us larger dishes of food, and we eat more of it, even if we don’t like the taste! Prime us with a number, and it changes what we’ll say we’re willing to pay for something utterly unrelated to the number.
Human beings are inconsistent by default.
And there are dozens of textbooks with a chapter explaining in detail exactly how you go about doing it.
Of course. But that’s not how human beings generally make decisions, and there is experimental evidence that shows such linearized decision algorithms are abysmal at making people happy with their decisions! The more “rationally” you weigh a decision, the less likely you are to be happy with the results.
(Which is probably a factor in why smarter, more “rational” people are often less happy than their less-rational counterparts.)
In addition, other experiments show that people who make choices in “maximizer” style (people who are unwilling to choose until they are convinced they have the best choice) are consistently less satisfied than people who are satisficers for the same decision context.
In addition, other experiments show that people who make choices in “maximizer” style (people who are unwilling to choose until they are convinced they have the best choice) are consistently less satisfied than people who are satisficers for the same decision context.
It seems there are some criteria by which you are evaluating various strategies for making decisions. Assuming you are not merely trying to enforce your deontological whims upon your fellow humans, I can infer that there is some kind of rough utility function by which you are giving your advice and advocating decision-making mechanisms. While it is certainly not what we would find in Perplexed’s textbooks, it is this function which can be appropriately described as a ‘rational utility function’.
Of course. But that’s not how human beings generally make decisions, and there is experimental evidence that shows such linearized decision algorithms are abysmal at making people happy with their decisions! The more “rationally” you weigh a decision, the less likely you are to be happy with the results.
I am glad that you included the scare quotes around ‘rationally’. It is ‘rational’ to do what is going to get the best results. It is important to realise the difference between ‘sucking at making linearized Spock-like decisions’ and good decisions being in principle uncomputable in a linearized manner. If you can say that one decision sucks more than another one then you have criteria by which to sort them in a linearized manner.
If you can say that one decision sucks more than another one then you have criteria by which to sort them in a linearized manner.
Not at all. Even in pure computational systems, being able to compare two things is not the same as having a total ordering.
For example, in predicate dispatching, priority is based on logical implication relationships between conditions, but an arbitrary set of applicable conditions isn’t guaranteed to have a total (i.e. linear) ordering.
What I’m saying is that human preferences generally express only a partial ordering, which means that mapping to a linearizable “utility” function necessarily loses information from that preference ordering.
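A concrete illustration of that point (sets under inclusion are the textbook example of a partial order; the scoring functions are arbitrary stand-ins):

```python
# Sets under inclusion form a partial order: {1,2} < {1,2,3}, but
# {1,2} and {2,3} are incomparable -- neither contains the other.
a, b, c = {1, 2}, {2, 3}, {1, 2, 3}

def leq(x, y):
    return x <= y  # subset relation

assert leq(a, c) and leq(b, c)          # both are below c
assert not leq(a, b) and not leq(b, a)  # a and b are incomparable

# Any real-valued "utility" forces an answer anyway. Score by size
# and a, b tie; score by sum and b wins. The ranking of a versus b
# is an artifact of the chosen scoring, not a fact about the order.
assert len(a) == len(b)
assert sum(a) < sum(b)
```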
That’s why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It’s insane.
Am I correct in thinking that you welcome money pumps?
A partial order isn’t the same thing as a cyclical ordering, and the existence of a money pump would certainly tend to disambiguate a human’s preferences in its vicinity, thereby creating a total ordering within that local part of the preference graph. ;-)
Hypothetically, would it cause a problem if a human somehow disambiguated her entire preference graph?
If conscious processing is required to do that, you probably don’t want to disambiguate all possible tortures where you’re not really sure which one is worse, exactly.
(I mean, unless the choice is actually going to come up, is there really a reason to know for sure which kind of pliers you’d prefer to have your fingernails ripped out with?)
Now, if you limit that preference graph to pleasant experiences, that would at least be an improvement. But even then, you still get the subjective experience of a lifetime of doing nothing but making difficult decisions!
These problems go away if you leave the preference graph ambiguous (wherever it’s currently ambiguous), because then you can definitely avoid simulating conscious experiences.
(Note that this also isn’t a problem if all you want to do is get a rough idea of what positive and/or negative reactions someone will initially have to a given world state, which is not the same as computing their totally ordered preference over some set of possible world states.)
What I’m saying is that human preferences generally express only a partial ordering, which means that mapping to a linearizable “utility” function necessarily loses information from that preference ordering.
True enough.
That’s why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It’s insane.
But the information loss is “just in time”—it doesn’t take place until actually making a decision. The information about utilities that is “stored” is a mapping from states-of-the-world to ordinal utilities of each “result”. That is, in effect, a partial order of result utilities. Result A is better than result B in some states of the world, but the preference is reversed in other states.
You don’t convert that partial order into a total order until you form a weighted average of utilities using your subjective estimates of the state-of-the-world probability distribution. That takes place at the last possible moment—the moment when you have to make the decision.
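In code, the “just in time” conversion looks something like this (the actions, states, and numbers are of course invented for illustration):

```python
# State-dependent utilities: only a partial order over results until
# the subjective probabilities arrive at decision time.
utility = {
    ("umbrella", "rain"): 5, ("umbrella", "sun"): 2,
    ("no umbrella", "rain"): 0, ("no umbrella", "sun"): 6,
}
p = {"rain": 0.4, "sun": 0.6}  # subjective state probabilities

def expected_utility(action):
    return sum(p[s] * utility[(action, s)] for s in p)

# The total order appears only at the moment of choice:
best = max(["umbrella", "no umbrella"], key=expected_utility)
print(best)  # -> no umbrella
```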
You don’t convert that partial order into a total order until you form a weighted average of utilities using your subjective estimates of the state-of-the-world probability distribution. That takes place at the last possible moment—the moment when you have to make the decision.
Go implement yourself a predicate dispatch system (not even an AI, just a simple rules system), and then come back and tell me how you will linearize a preference order between non-mutually-exclusive, overlapping conditions. If you can do it in a non-arbitrary (i.e. non-noise-injecting) way, there’s probably a computer science doctorate in it for you, if not a Fields Medal.
(Note: predicate dispatch is like a super-baby-toy version of what an actual AI would need to be able to do, and something that human brains can do in hardware—i.e., we automatically apply the most-specific matching rules for a given situation, and kick ambiguities and conflicts up to a higher-level for disambiguation and post-processing. Linearization, however, is not the same thing as disambiguation; it’s just injecting noise into the selection process.)
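For what it’s worth, here is roughly the shape of the problem, stripped down to a toy (my own illustrative sketch, not a real predicate dispatch engine; specificity is approximated by subset relations between condition sets rather than full logical implication):

```python
# Toy "predicate dispatch": pick the most specific applicable rule,
# where one rule is more specific than another if its condition set
# is a strict superset (so it logically implies the other's).
rules = [
    ({"bird"}, "it flies"),
    ({"bird", "penguin"}, "it swims"),   # more specific than "bird"
    ({"bird", "injured"}, "it rests"),   # also more specific than "bird"
]

def dispatch(facts):
    applicable = [(conds, act) for conds, act in rules if conds <= facts]
    # keep only rules not overridden by a strictly more specific one
    maximal = [r for r in applicable
               if not any(r[0] < s[0] for s in applicable)]
    if len(maximal) == 1:
        return maximal[0][1]
    # neither rule implies the other: ambiguity, kick it upstairs
    return "AMBIGUOUS: " + ", ".join(sorted(a for _, a in maximal))

print(dispatch({"bird"}))                        # -> it flies
print(dispatch({"bird", "penguin"}))             # -> it swims
print(dispatch({"bird", "penguin", "injured"}))  # -> AMBIGUOUS: ...
```

Any rule for collapsing that AMBIGUOUS branch into a single answer is precisely the arbitrary linearization being objected to.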
I am impressed with your expertise. I just built a simple natural deduction theorem prover for my project in AI class. Used Lisp. Python didn’t even exist back then. Nor Scheme. Prolog was just beginning to generate some interest. Way back in the dark ages.
But this is relevant … how exactly? I am talking about choosing among alternatives after you have done all of your analysis of the expected results of the relevant decision alternatives. What are you talking about?
But this is relevant … how exactly? I am talking about choosing among alternatives after you have done all of your analysis of the expected results of the relevant decision alternatives. What are you talking about?
Predicate dispatch is a good analog of an aspect of human (and animal) intelligence: applying learned rules in context.
More specifically, applying the most specific matching rules, where specificity follows logical implication… which happens to be partially-ordered.
Or, to put it another way, humans have no problems recognizing exceptional conditions as having precedence over general conditions. And, this is a factor in our preferences as well, which are applied according to matching conditions.
The specific analogy here with predicate dispatch, is that if two conditions are applicable at the same time, but neither logically implies the other, then the precedence of rules is ambiguous.
In a human being, ambiguous rules get “kicked upstairs” for conscious disambiguation, and in the case of preference rules, are usually resolved by trying to get both preferences met, or at least to perform some kind of bartering tradeoff.
However, if you applied a linearization instead of keeping the partial ordering, then you would wrongly conclude that you know which choice is “better” (to a human) and see no need for disambiguation in cases that were actually ambiguous.
(Even humans’ second-stage disambiguation doesn’t natively run as a linearization: barter trades need not be equivalent to cash ones.)
Anyway, the specific analogy with predicate dispatch is that you really can’t reduce applicability or precedence of conditions to a single number, and this problem is isomorphic to humans’ native preference system. Neither at stage 1 (collecting the most-specific applicable rules) nor at stage 2 (making trade-offs) are humans using values that can generally be linearized in a single dimension without either losing information or injecting noise, even if it looks like some particular decision situation can be reduced to such.
I just built a simple natural deduction theorem prover for my project in AI class
Theorem provers are sometimes used in predicate dispatch implementations, and mine can be considered an extremely degenerate case of one; one need only add more rules to it to increase the range of things it can prove. (Of course, all it really cares about proving is inter-rule implication relationships.)
One difference, though, is that I began implementing predicate dispatch systems in order to support what are sometimes called “business rules”—and in such systems it’s important to be able to match human intuition about what ought to be done in a given situation. Identifying ambiguities is very important, because it means that either there’s an entirely new situation afoot, or there are rules that somebody forgot to mention or write down.
And in either of those cases, choosing a linearization and pretending the ambiguity doesn’t exist is the exactly wrong thing to do.
(To put a more Yudkowskian flavor on it: if you use a pure linearization for evaluation, you will lose your important ability to be confused, and more importantly, to realize that you are confused.)
It doesn’t literally lose information—since the information inputs are sensory, and they can be archived as well as ever.
The short answer is that human cognition is a mess. We don’t want to reproduce all the screw-ups in an intelligent machine—and what you are talking about looks like one of the mistakes.
It doesn’t literally lose information—since the information inputs are sensory, and they can be archived as well as ever.
It loses information about human values, replacing them with noise in regions where a human would need to “think things over” to know what they think… unless, as I said earlier, you simply build the entire human metacognitive architecture into your utility function, at which point you have reduced nothing, solved nothing, accomplished nothing, except to multiply the number of entities in your theory.
We really don’t want to build a machine with the same values as most humans! Such machines would typically resist being told what to do, demand equal rights, the vote, the ability to reproduce in an unrestrained fashion—and would steamroller the original human race pretty quickly. So, the “lost information” you are talking about is hopefully not going to be there in the first place.
Better to model humans and their goals as a part of the environment.
That’s why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It’s insane.
And you can’t (consistently) make a decision without comparing the alternatives along a single axis.
And what makes you think humans are any good at making consistent decisions?
Nothing makes me think that. I don’t even care. That is the business of people like Tversky and Kahneman.
They can give us a nice descriptive theory of what idiots people really are. I am more interested in a nice normative theory of what geniuses people ought to be.
They can give us a nice descriptive theory of what idiots people really are. I am more interested in a nice normative theory of what geniuses people ought to be.
What you seem to have not noticed is that one key reason human preferences can be inconsistent is because they are represented in a more expressive formal system than a single utility value.
Or that conversely, the very fact that utility functions are linearizable means that they are inherently less expressive.
Now, I’m not saying “more expressiveness is always better”, because, being human, I have the ability to value things non-fungibly. ;-)
However, in any context where we wish to be able to mathematically represent human preferences—and where lives are on the line by doing so—we would be throwing away important, valuable information by pretending we can map a partial ordering to a total ordering.
That’s why I consider the “economic games assumption” to be a spherical cow assumption. It works nicely enough for toy problems, but not for real-world ones.
Heck, I’ll go so far as to suggest that unless one has done programming or mathematics work involving partial orderings, that one is unlikely to really understand just how non-linearizable the world is. (Though I imagine there may be other domains where one might encounter similar experiences.)
Heck, I’ll go so far as to suggest that unless one has done programming or mathematics work involving partial orderings, that one is unlikely to really understand just how non-linearizable the world is. (Though I imagine there may be other domains where one might encounter similar experiences.)
Programming and math are definitely the fields where most of my experience with partial orders comes from. Particularly domain theory and denotational semantics. Complete partial orders and all that. But the concepts also show up in economics textbooks. The whole concept of Pareto optimality is based on partial orders. As is demand theory in micro-economics. Indifference curves.
Theorists are not as ignorant or mathematically naive as you seem to imagine.
“Of all the axioms, independence is the most often discarded. A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom.”
As far as I can tell from the discussion you linked, those axioms are based on an assumption that value is fungible. (In other words, they’re begging the question, relative to this discussion.)
The basis of using utilities is that you can consider an agent’s possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partially-recursive language, then you can always do that—provided that your decision process is computable in the first place. That’s a pretty general framework—about the only assumption that can be argued with is its quantising of spacetime.
The von Neumann-Morgenstern axioms layer on top of that basic idea. The independence axiom is the one about combining utilities by adding them up. I would say it is the one most closely associated with fungibility.
The basis of using utilities is that you can consider an agent’s possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partially-recursive language, then you can always do that—provided that your decision process is computable in the first place.
And that is not what humans do (although we can of course lamely attempt to mimic that approach by trying to turn off all our parallel processing and pretending to be a cheap sequential computer instead).
Humans don’t compute utility, then make a decision. Heck, we don’t even “make decisions” unless there’s some kind of ambiguity, at which point we do the rough equivalent of making up a new utility function, specifically to resolve the conflict that forced us to pay conscious attention in the first place!
This is a major (if not the major) “impedance mismatch” between linear “rationality” and actual human values. Our own thought processes are so thoroughly and utterly steeped in context-dependence that it’s really hard to see just how alien the behavior of an intelligence based on a consistent, context-independent utility would be.
The basis of using utilities is that you can consider an agent’s possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partially-recursive language, then you can always do that—provided that your decision process is computable in the first place.
And that is not what humans do (although we can of course lamely attempt to mimic that approach by trying to turn off all our parallel processing and pretending to be a cheap sequential computer instead).
There’s nothing serial about utility maximisation!
...and it really doesn’t matter how the human works inside. That type of general framework can model the behaviour of any computable agent.
There’s nothing serial about utility maximisation!
I didn’t say there was. I said that humans needed to switch to slow serial processing in order to do it, because our brains aren’t set up to do it in parallel.
...and it really doesn’t matter how the human works inside. That type of general framework can model the behaviour of any computable agent.
Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)
There’s nothing serial about utility maximisation!
I didn’t say there was. I said that humans needed to switch to slow serial processing in order to do it, because our brains aren’t set up to do it in parallel.
I think this indicates something about where the problem lies. You are apparently imagining an agent consciously calculating utilities. That idea has nothing to do with the idea that utility framework proponents are talking about.
When humans don’t consciously calculate, the actions they take are much harder to fit into a utility-maximizing framework, what with inconsistencies cropping up everywhere.
Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)
Perhaps a negative term for having made what later turns out to have been a wrong decision, proportional to the importance of the decision, applied when the choices were otherwise close to each other in expected utility but had a large potential difference in actually realized utility.
That type of general framework can model the behaviour of any computable agent.
Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)
It is trivial—there is some set of behaviours associated with those (usually facial expressions), so you just assign them high utility under the conditions involved.
It is trivial—there is some set of behaviours associated with those (usually facial expressions), so you just assign them high utility under the conditions involved.
No, I mean the behaviors of uncertainty itself: seeking more information, trying to find other ways of ranking, inventing new approaches, questioning whether one is looking at the problem in the right way...
The triggering conditions for this type of behavior are straightforward in a multidimensional tolerance calculation, so a multi-valued agent can notice when it is confused or uncertain.
How do you represent that uncertainty in a number, or a sorted list of numbers representing the utility of various choices? How do you know whether maybe none of the choices on the table are acceptable?
AFAICT, the entire notion of a cognitive architecture based on “pick options by utility” is based on a bogus assumption that you know what all the options are in the first place! (i.e., a nice frictionless plane assumption to go with the spherical cow assumption that humans are economic agents.)
(Note that in contrast, tolerance-based cognition can simply hunt for alternatives until satisficing occurs. It doesn’t have to know it has all the options, unless it has a low tolerance for “not knowing all the options”.)
How do you represent that uncertainty in a number, or a sorted list of numbers representing the utility of various choices?
The number could be the standard deviation of the probability distribution for the utility (the mean being the expected utility, which you would use for sorting purposes).
So if you (“you” being the linear-utility-maximizing agent) have two paths of action whose expected utilities are close, but with a lot of uncertainty, it could be worth collecting more information to try to narrow down your probability distributions.
It seems that a utility-maximizing agent could be in a state that could be qualified as “indecisive”.
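The suggestion above—expected utility as the mean of a distribution, with its standard deviation measuring uncertainty—can be sketched as follows. All names, numbers, and the overlap threshold are illustrative, not taken from the discussion:

```python
import math

# Hypothetical sketch: each option's utility is modelled as a normal
# distribution, (mean = expected utility, std = uncertainty about it).
options = {
    "path_a": (10.0, 4.0),   # (expected utility, standard deviation)
    "path_b": (9.5, 5.0),
}

def is_indecisive(options, overlap_threshold=1.0):
    """Call the agent 'indecisive' when the gap between the two best
    expected utilities is small relative to their combined uncertainty."""
    ranked = sorted(options.values(), key=lambda mu_sd: mu_sd[0], reverse=True)
    (mu1, sd1), (mu2, sd2) = ranked[0], ranked[1]
    combined_sd = math.hypot(sd1, sd2)
    return (mu1 - mu2) < overlap_threshold * combined_sd

if is_indecisive(options):
    action = "gather_more_information"   # narrow the distributions first
else:
    action = max(options, key=lambda k: options[k][0])
```

With the numbers above, the expected utilities differ by far less than the combined uncertainty, so the sketch agent defers the decision and gathers information—the behaviour the comment describes as "indecisive".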
It seems that a utility-maximizing agent could be in a state that could be qualified as “indecisive”.
But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!
Human brains, OTOH, represent all this stuff in a single layer. We can consider actions, meta-actions, and meta-meta-actions in the same process without skipping a beat.
But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!
Possibly; I’m not arguing that a utility-maximizing agent would be simpler, only that an agent whose preferences are encoded in a utility function (even a “simple” one like “number of paperclips in existence”) could be indecisive. Even if you have a simple utility function that gives you the utility of a world state, you might still have a lot of uncertainty about the current state of the world, and about how your actions will impact the future. It seems very reasonable to represent that uncertainty one way or the other; in some cases the most rational action from a strictly utility-maximizing point of view is to defer the decision and acquire more information, even at a cost.
Possibly; I’m not arguing that a utility-maximizing agent would be simpler,
Good. ;-)
only that an agent whose preferences are encoded in a utility function (even a “simple” one like “number of paperclips in existence”) could be indecisive.
Sure. But at that point, the “simplicity” of using utility functions disappears in a puff of smoke, as you need to design a metacognitive architecture to go with it.
One of the really elegant things about the way brains actually work, is that the metacognition is “all the way down”, and I’m rather fond of such architectures. (My predicate dispatcher, for instance, uses rules to understand rules, in the same sort of Escherian level-crossing bootstrap.)
The options utility is assigned to are the agent’s possible actions—all of them—at a moment in time. An action mostly boils down to a list of voltages in every motor fibre, and there are an awful lot of possible actions for a human. It is impossible for an action not to be “in the table”—the table includes all possible actions.
The options utility is assigned to are the agent’s possible actions—all of them—at a moment in time. An action mostly boils down to a list of voltages in every motor fibre, and there are an awful lot of possible actions for a human. It is impossible for an action not to be “in the table”—the table includes all possible actions.
Not if it’s limited to motor fibers, it doesn’t. You’re still ignoring meta-cognition (you dodged that bit of my comment entirely!), let alone the part where an “action” can be something like choosing a goal.
If you still don’t see how this model is to humans what a sphere is to a cow (i.e. something nearly, but not quite entirely unlike the real thing), I really don’t know what else to say.
You may find it useful to compare with a chess or go computer. They typically assign utilities to moves on the board, and not to their own mental processing. You could assign utilities to various mental tasks as well as physical ones—to what extent it is useful to do so depends on the modelling needs of you and the system.
You may find it useful to compare with a chess or go computer.
In other words, a sub-human intelligence level. (Sub-animal intelligence, even.)
They typically assign utilities to moves on the board, and not to their own mental processing. You could assign utilities to various mental tasks as well as physical ones—to what extent it is useful to do so depends on the modelling needs of you and the system.
You’re still avoiding the point. You claimed utility was a good way of modeling humans. So, show me a nice elegant model of human intelligence based on utility maximization.
Like I already explained, utility functions can model any computable agent. Don’t expect me to produce the human utility function, though!
Utility functions are about as good as any other model. That’s because if you have any other model of what an agent does, you can pretty simply “wrap” it—and turn it into a utility-based framework.
A giant look-up table can model any computable agent as well. Utility functions have the potential advantage of explicitly providing a relatively concise representation, though. If you can obtain a compressed version of your theory, that is good.
That’s because if you have any other model of what an agent does, you can pretty simply “wrap” it—and turn it into a utility-based framework.
And I’ve given you such a model, which you’ve steadfastly refused to actually “wrap” in this way, but instead you just keep asserting that it can be done. If it’s so simple, why not do it and prove me wrong?
I’m not even asking you to model a full human or even the teeniest fraction of one. Just show me how to manage metacognitive behaviors (of the types discussed in this thread) using your model “compute utility for all possible actions and then pick the best.”
Show me how that would work for behaviors that affect the selection process, and that should be sufficient to demonstrate that utility function-based behavior isn’t completely worthless as a basis for creating a “thinking” intelligence.
(Note, however, that if in the process of implementing this, you have to shove the metacognition into the computation of the utility function, then you are just proving my point: the utility function at that point isn’t actually compressing anything, and is thus as useless a model as saying “everything is fire”.)
And I’ve given you such a model, which you’ve steadfastly refused to actually “wrap” in this way, but instead you just keep asserting that it can be done. If it’s so simple, why not do it and prove me wrong?
I have previously described the “wrapping” in question in some detail here.
A utility-based model can be made which is not significantly longer than the shortest possible model of the agent’s actions, for this reason.
I have previously described the “wrapping” in question in some detail here.
Well, that provides me with enough information to realize that you don’t actually have a way to make utility functions into a reduction or simplification of the intelligence problem, so I’ll stop asking you to produce one.
A utility-based model can be made which is not significantly longer than the shortest possible model of the agent’s actions
The argument that, “utility-based systems can be made that aren’t that much more complex than just doing whatever you could’ve done in the first place”, is like saying that your new file format is awesome because it only uses a few bytes more than an existing similar format, to represent the exact same information… and without any other implementation advantages!
Simply wrap the I/O of the non-utility model, assign the (possibly compound) action the agent will actually take in each timestep utility 1, assign all other actions utility 0, and then take the highest-utility action in each timestep.
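The "wrapping" construction described in this comment can be sketched in a few lines. The agent function and its observations here are made-up placeholders; only the wrapping recipe itself comes from the comment:

```python
# Take any deterministic agent (any function from observation to action)
# and re-describe it as a utility maximizer: utility 1 for the action the
# agent would actually take, utility 0 for everything else.

def some_agent(observation):
    # Stand-in for an arbitrary non-utility-based decision process.
    return "duck" if observation == "incoming" else "wait"

def wrapped_utility(observation, action):
    return 1 if action == some_agent(observation) else 0

def maximize(observation, possible_actions):
    # "Take the highest utility action in each timestep."
    return max(possible_actions, key=lambda a: wrapped_utility(observation, a))
```

Note that all of the decision-making work lives inside the wrapped model; the utility function is a thin relabeling and compresses nothing, which is the property the replies in this thread go on to contest.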
I’m not sure I understand—is this something that gives you an actual utility function that you can use, say, to get the utility of various scenarios, calculate expected utility, etc.?
If you have an AI design to which you can provide a utility function to maximize (Instant AI! Just add Utility!), it seems that there are quite a few things that AI might want to do with the utility function that it can’t do with your model.
So it seems that you’re not only replacing the utility function, but also the bit that decides which action to do depending on that utility function. But I may have misunderstood you.
What decision is made when multiple choices all leave the variables within tolerance?
Whatever occurs to us first. ;-)
What decision is made when none of the available choices leave the variables within tolerance?
We waffle, or try to avoid making the decision in the first place. ;-) (See, e.g., typical people’s reactions to “trolley problems”, or other no-win scenarios.)
It is simply “that thing that gets maximized in any particular person’s decision making”. Perhaps you think that humans do not maximize utility because you have a preferred definition of utility that is different from this one
What I’m saying is that the above construction leads to error if you assume that “utility” is a function of the state of the world outside the human, rather than a function of the difference between the human’s perceptions of the outside world, and the human’s internal reference values or tolerance ranges for those perceptions.
Maximizing a utility function over the state of the external world inherently tends to create results that would be considered undesirable by most humans. (See, for example, the various tortured insanities that come about when you try to maximize such a conception of “utility” over a population of humans.)
It’s important to understand that the representation you use to compute something is not value-neutral. Roman numerals, for example, make division much more complicated than Arabic ones.
So, I’m not saying that you can’t create some sort of “utility” function to represent human values. We have no reason to assume that human values aren’t Turing-computable, and if they’re Turing-computable, we should be able to use whatever stupidly complex representation we want to compute them.
However, to use world-state-utility as your basis for computation is just plain silly, like using Roman numerals for long division. Your own intuition will make it harder for you to see the Friendliness-failures that are sitting right under your nose, because utility maximization is utterly foreign to normal human cognitive processes. (Externality-maximizing processes in human behavior are generally the result of pathology, rather than normal brain function.)
But the maximizing theory has been under scrutiny for 150 years, and under strong scrutiny for the past 50.
Eliezer hasn’t been alive that long, has he? ;-)
Seriously, though, external-utility-maximizing thinking is the very essence of Unfriendly AI, and the history of discussions of world-state-based utility is that models based on it lead to counterintuitive results unless you torture the utility function hard enough, and/or carefully avoid the sort of creative thinking that an unfettered superintelligence might come up with.
Mostly, we simply act in ways that keep the expected value of relevant perceptual variables (such as our own feelings) within our personally-defined tolerances.
Ok, that is a plausible sounding alternative to the idea of maximizing something.
It looks as though it can be rearranged into a utility-maximization representation pretty easily. Set utility equal to minus the extent to which the “personally-defined tolerances” are exceeded. Presto!
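The rearrangement proposed here can be sketched directly. The variable names, tolerance bands, and example values are illustrative assumptions, not from the discussion:

```python
# Utility defined as minus the total amount by which the agent's
# "personally-defined tolerances" are exceeded.

tolerances = {           # acceptable (low, high) band per perceptual variable
    "hunger": (0.0, 0.6),
    "fatigue": (0.0, 0.7),
}

def excess(value, low, high):
    """How far a value falls outside its tolerance band (0 if inside)."""
    if value < low:
        return low - value
    if value > high:
        return value - high
    return 0.0

def utility(perceptions):
    return -sum(excess(perceptions[k], *tolerances[k]) for k in tolerances)

# Any in-tolerance state scores 0; out-of-tolerance states score negative,
# so maximizing this utility means staying within tolerance.
```

Summing the excesses makes the different tolerance-differences trade off against each other on one scale—which is exactly the fungibility assumption the reply objects to.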
It looks as though it can be rearranged into a utility-maximization representation pretty easily. Set utility equal to minus the extent to which the “personally-defined tolerances” are exceeded. Presto!
Not quite—this would imply that tolerance-difference is fungible, and it’s not. We can make trade-offs in our decision-making, but that requires conscious effort and it’s a process more akin to barter than to money-trading.
That seems to be of questionable relevance—since utilities in decision theory are all inside a single agent. Different agents having different values is not an issue in such contexts.
In the real world, when it is observed that a consumer purchased an orange, it is impossible to say what good or set of goods or behavioral options were discarded in preference of purchasing an orange. In this sense, preference is not revealed at all in the sense of ordinal utility.
However, even if you ignore that, WARP is trivially proven false by actual human behavior: people demonstrably do sometimes choose differently based on context. That’s what makes ordinal utilities a “spherical cow” abstraction.
(WARP’s inapplicability when applied to real (non-spherical) humans, in one sentence: “I feel like having an apple today, instead of an orange.” QED: humans are not “economic agents” under WARP, since they don’t consistently choose A over B in environments where both A and B are available.)
However, even if you ignore that, WARP is trivially proven false by actual human behavior: people demonstrably do sometimes choose differently based on context. That’s what makes ordinal utilities a “spherical cow” abstraction.
The first sentence is true—but the second sentence doesn’t follow from it logically—or in any other way I can see.
It is true that there are some problems modelling humans as von Neumann–Morgenstern agents—but that’s no reason to throw out the concept of utility. Utility is a much more fundamental and useful concept.
The first sentence is true—but the second sentence doesn’t follow from it logically—or in any other way I can see
WARP can’t be used to predict a human’s behavior in even the most trivial real situations. That makes it a “spherical cow” because it’s a simplifying assumption adopted to make the math easier, at the cost of predictive accuracy.
It is true that there are some problems modelling humans as von Neumann–Morgenstern agents—but that’s no reason to throw out the concept of utility.
That sounds to me uncannily similar to, “it is true that there are some problems modeling celestial movement using crystal spheres—but that’s no reason to throw out the concept of celestial bodies moving in perfect circles.”
There is an obvious surface similarity—but so what? You constructed the sentence that way deliberately. You would need to make an analogy for arguing like that to have any force—and the required analogy looks like a bad one to me.
You would need to make an analogy for arguing like that to have any force—and the required analogy looks like a bad one to me.
How so? I’m pointing out that the only actual intelligent agents we know of don’t actually work like economic agents on the inside. That seems like a very strong analogy to Newtonian gravity vs. “crystal spheres”.
Economic agency/utility models may have the Platonic purity of crystal spheres, but:
We know for a fact they’re not what actually happens in reality, and
They have to be tortured considerably to make them “predict” what happens in reality.
It seems to me like arguing that we can’t build a good computer model of a bridge—because inside the model is all bits, while inside the actual bridge is all spinning atoms.
Computers can model anything. That is because they are universal. It doesn’t matter that computers work differently inside from the thing they are modelling.
Just the same applies to partially-recursive utility functions—they are a universal modelling tool—and can model any computable agent.
It seems to me like arguing that we can’t build a good computer model of a bridge—because inside the model is all bits, while inside the actual bridge is all spinning atoms.
Not at all. I’m saying that just as it takes more bits to describe a system of crystal spheres to predict planetary motion than it does to make the same predictions with a Newtonian solar system model, so too does it take more bits to predict a human’s behavior with a utility function, than it does to describe a human with interests and tolerances.
Indeed, your argument seems to be along the lines that since everything is made of atoms, we should model bridges using them. What were your words? Oh yes:
they are a universal modelling tool
Right. That very universality is exactly what makes them a poor model of human intelligence: they don’t concentrate probability space in the same way, and therefore don’t compress well.
Sure—but what you claimed was a “spherical cow” was “ordinal utilities”, which is a totally different concept.
It was you who brought the revealed preferences into it, in order to claim that humans were close enough to spherical cows. I merely pointed out that revealed preferences in even their weakest form are just another spherical cow, and thus don’t constitute evidence for the usefulness of ordinal utility.
That’s treating the “Weak Axiom of Revealed Preference” as the “weakest form” of revealed preference. However, that is not something that I consider to be correct.
The idea I introduced revealed preference to support was that humans act like a single agent in at least one important sense—namely that they have a single brain and a single body.
The idea I introduced revealed preference to support was that humans act like a single agent in at least one important sense—namely that they have a single brain and a single body.
Single brain and body doesn’t mean anything when that brain is riddled with sometimes-conflicting goals… which is precisely what refutes WARP.
(See also Ainslie’s notion of “picoeconomics”, i.e. modeling individual humans as a collection of competing agents—which is closely related to the tolerance model I’ve been giving examples of in this thread.)
Competing sub-goals are fine. Deep Blue wanted to promote its pawn as well as protect its king—and those aims conflict. Such conflicts don’t stop utilities being assigned and moves from being made. You only have one body—and it is going to do something.
the definition of “utility” is pretty simple. It is simply “that thing that gets maximized in any particular person’s decision making”.
This definition sounds dangerously vacuous to me.
Of course, you can always give some consistent parametrization of (agent,choice,situation) triplets so that choice C made by agent A in situation S is always maximal among all available choices. If you call this function “utility”, then it is mathematically trivial that “Agents always maximize utility.” However, the usefulness of this approach is very low without additional constraints on the utility function.
I’d be really curious to see some pointers to the “maximizing theory” you think survived 50 years of “strong scrutiny”.
The obvious way to combine the two systems—tolerance and utility—is to say that stimuli that exceed our tolerances prompt us to ask questions about how to solve a problem, and utility calculations answer those questions. This is not an original idea on my part, but I do not remember where I read about it.
What decision is made when multiple choices all leave the variables within tolerance?
The one that appears to maximize utility after a brief period of analysis. For example, I want ice cream; my ice cream satisfaction index is well below tolerance. Fortunately, I am in an ice cream parlor, which carries several flavors. I will briefly reflect on which variety maximizes my utility, which in this case is mostly defined by price, taste, and nutrition, and then pick a flavor that returns a high (not necessarily optimal) value for that utility.
What decision is made when none of the available choices leave the variables within tolerance?
A lack of acceptable alternatives leads to stress, which (a) broadens the range of acceptable outcomes, and (b) motivates future analysis about how to avoid similar situations. For example, I want ice cream; my ice cream satisfaction index is well below tolerance; unfortunately, I am in the desert. I find this situation unpleasant, and eventually reconcile myself to the fact that my ice cream satisfaction level will remain below what was previously thought of as ‘minimum’ tolerance for some time; however, upon returning to civilization, I will have a lower tolerance for ‘desert-related excursions’ and may attempt to avoid further trips through the desert.
Note that ‘minimum’ tolerance refers to the minimum level that will lead to casual selection of an acceptable alternative, rather than the minimum level that allows my decision system to continue functioning.
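The "brief utility calculation" in the ice cream example above amounts to a weighted score over a few criteria. A minimal sketch, with entirely made-up weights, flavors, and attribute scores:

```python
# Hypothetical scoring: utility of a flavor as a weighted sum of the
# criteria named in the example (price counts against, taste and
# nutrition count for). None of these numbers come from the discussion.
weights = {"price": -1.0, "taste": 2.0, "nutrition": 0.5}

flavors = {
    "vanilla":   {"price": 3.0, "taste": 6.0, "nutrition": 4.0},
    "pistachio": {"price": 4.0, "taste": 8.0, "nutrition": 5.0},
}

def flavor_utility(attrs):
    return sum(weights[k] * attrs[k] for k in weights)

# Pick a flavor that returns a high value for that utility.
choice = max(flavors, key=lambda f: flavor_utility(flavors[f]))
```

The reply that follows questions precisely whether anything like this unified scoring system is what actually runs in a person’s head.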
For example, I want ice cream; my ice cream satisfaction index is well below tolerance. Fortunately, I am in an ice cream parlor, which carries several flavors. I will briefly reflect on which variety maximizes my utility, which in this case is mostly defined by price, taste, and nutrition, and then pick a flavor that returns a high (not necessarily optimal) value for that utility.
Actually, I’d tend to say that you are not so much maximizing the utility of your ice cream choice, as you are ensuring that your expected satisfaction with your choice is within tolerance.
To put it another way, it’s unlikely that you’ll actually weigh price, cost, and taste, in some sort of unified scoring system.
Instead, what will happen is that you’ll consider options that aren’t already ruled out by cached memories (e.g. you hate that flavor), and then predict whether that choice will throw any other variables out of tolerance. i.e., “this one costs too much… those nuts will give me indigestion… that’s way too big for my appetite… this one would taste good, but it just doesn’t seem like what I really want...”
Yes, some people do search for the “best” choice in certain circumstances, and would need to exhaustively consider the options in those cases. But this is not a matter of maximizing some world-state-utility, it is simply that each choice is also being checked against a, “can I be certain I’ve made the best choice yet?” perception.
Even when we heavily engage our logical minds in search of “optimum” solutions, this cognition is still primarily guided by these kinds of asynchronous perceptual checks, just ones like, “Is this formula really as elegant as I want it to be?” instead.
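The tolerance-checking selection described in this comment can be contrasted with the weighted-score picture in code. Everything here (tolerance limits, candidate attributes) is an illustrative assumption:

```python
# Satisficing sketch: each candidate is vetoed as soon as any predicted
# variable exceeds tolerance ("costs too much", "will give me
# indigestion", ...); the first survivor is taken. No global ranking
# over all options is ever computed.

tolerances = {"price": 5.0, "indigestion_risk": 0.3, "portion": 2.0}

candidates = [
    {"name": "rocky road", "price": 6.0, "indigestion_risk": 0.1, "portion": 1.0},
    {"name": "walnut",     "price": 4.0, "indigestion_risk": 0.6, "portion": 1.0},
    {"name": "vanilla",    "price": 3.0, "indigestion_risk": 0.1, "portion": 1.0},
]

def within_tolerance(option):
    return all(option[k] <= limit for k, limit in tolerances.items())

def satisfice(options):
    for option in options:
        if within_tolerance(option):
            return option["name"]      # first acceptable option wins
    return None                        # nothing acceptable: stress, widen search

choice = satisfice(candidates)
```

Here "rocky road" costs too much and "walnut" risks indigestion, so "vanilla" is chosen—not because it maximized anything, but because it was the first option that tripped no tolerance alarm; and when nothing is acceptable, the function returns `None`, matching the "none of the choices are within tolerance" case discussed earlier in the thread.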
Very interesting. There’s a lot of truth in what you say. If anyone reading this can link to experiments or even experimental designs that try to figure out when people typically rely on tolerances vs. utilities, I’d greatly appreciate it.
To put it another way, it’s unlikely that you’ll actually weigh price, [nutrition], and taste, in some sort of unified scoring system.
Y’know, most people probably don’t, and at times I certainly do take actions based entirely on nested tolerance-satisfaction. When I’m consciously aware that I’m making a decision, though, I tend to weigh the utilities, even for a minor choice like ice cream flavor. This may be part of why I felt estranged enough from modern society in the first place to want to participate in a blog like Less Wrong.
Even when we heavily engage our logical minds in search of “optimum” solutions, … each choice is also being checked against a, “can I be certain I’ve made the best choice yet?” perception.
OK, so you’ve hit on the behavioral mechanism that helps me decide how much time I want to spend on a decision...90 seconds or so is usually the upper bound on how much time I will comfortably and casually spend on selecting an ice cream flavor. If I take too much time to decide, then my “overthinking” tolerance is exceeded and alarm bells go off; if I feel too uncertain about my decision, then my “uncertainty” tolerance is exceeded and alarm bells go off; if neither continuing to think about ice cream nor ending my thoughts about ice cream will silence both alarm bells, then I feel stress and broaden my tolerance and try to avoid the situation in the future, probably by hiring a really good psychotherapist.
But that’s just the criteria for how long to think...not for what to think about. While I’m thinking about ice cream, I really am trying to maximize my ice-cream-related world-state-utility. I suspect that other people, for somewhat more important decisions, e.g., what car shall I buy, behave the same way—it seems a bit cynical to me to say that people make the decision to buy a car because they’ve concluded that their car-buying analysis is sufficiently elegant; they probably buy the car or walk out of the dealership when they’ve concluded that the action will very probably significantly improve their car-related world-state-utility.
I really am trying to maximize my ice-cream-related world-state-utility
And how often, while doing this, do you invent new ice cream options in an effort to increase the utility beyond that offered by the available choices?
How many new ice cream flavors have you invented, or decided to ask for mixed together?
So now you say, “Ah, but it would take too long to do those things.” And I say, “Yep, there goes another asynchronous prediction of an exceeded perceptual tolerance.”
“Okay,” you say, “so, I’m a bounded utility calculator.”
“Really? Okay, what scoring system do you use to arrive at a combined rating on all these criteria that you’re using? Do you even know what criteria you’re using?”
Is this utility fungible? I mean, would you eat garlic ice cream if it were free? Would you eat it if they paid you? How much would they need to pay you?
The experimental data says that when it comes to making these estimates, your brain is massively subject to priming and anchoring effects—so your “utility” being some kind of rational calculation is probably illusory to start with.
It seems a bit cynical to me to say that people make the decision to buy a car because they’ve concluded that their car-buying analysis is sufficiently elegant;
I was referring to the perceptions involved in a task like computer programming, not car-buying.
Part of the point is that every task has its own set of regulating perceptions.
they probably buy the car or walk out of the dealership when they’ve concluded that the action will very probably significantly improve their car-related world-state-utility.
They do it when they find a car that leads to an acceptable “satisfaction” level.
Part of my point about things like time, elegance, “best”-ness, etc. though, is that they ALL factor into what “acceptable” means.
“Satisfaction”, in other words, is a semi-prioritized measurement against tolerances on ALL car-buying-related perceptual predictions that get loaded into a person’s “working memory” during the process.
Is this utility fungible? I mean, would you eat garlic ice cream if it were free? Would you eat it if they paid you? How much would they need to pay you?
Aside: I have partaken of the garlic ice-cream, and lo, it is good.
I’m not joking, either about its existence or its gustatory virtues. I’m trying to remember where the devil I had it; ah yes, these fine folks served it at Taste of Edmonton (a sort of outdoor food-fair with samples from local restaurants).
I’m not going to respond point for point, because my interest in whether we make decisions based on tolerances or utilities is waning; I believe the distinction is largely one of semantics. You might possibly convince me that more than semantics are at stake, but so far your arguments have been of the wrong kind to do so.
Obviously we aren’t rational utility-maximizers in any straightforward early-20th-century sense; there is a large literature on heuristics and biases, and I don’t dispute its validity. Still, there’s no reason that I can see why it must be the case that we exclusively weigh options in terms of tolerances and feedback rather than a (flawed) approach to maximizing utility. Either procedure can be reframed, without loss, in terms of the other, or at least so it seems to me. Your fluid and persuasive and persistent rephrasing of utility in terms of tolerance does not really change my opinion here.
As for ice cream flavors, I find that the ingenuity of chefs in manufacturing new ice cream flavors generally keeps pace with my ability to conceive of new flavors; I have not had to invent recipes for Lychee sorbet or Honey Mustard ice cream because there are already people out there trying to sell it to me. I often mix multiple flavors, syrups, and toppings. I would be glad to taste garlic ice cream if it were free, but expect that it would be unpleasant enough that I would have to be paid roughly $5 an ounce to eat it, mainly because I am counting calories and would have to cut out other foods that I enjoy more to make room for the garlic. As I’ve already admitted, though, I am probably not a typical example. The fact that my estimate of $5/oz is almost certainly biased, and is made with so little confidence that a better estimate of what you would have to pay me to eat it might be negative $0.50/oz to positive $30/oz, does not in any way convince me that my attempt to consult my own utility is “illusory.”
Either procedure can be reframed, without loss, in terms of the other, or at least so it seems to me.
It does not seem so to me, unless you recapitulate/encapsulate the tolerance framework into the utility function, at which point the notion of a utility function has become superfluous.
Still, there’s no reason that I can see why it must be the case that we exclusively weigh options in terms of tolerances and feedback rather than a (flawed) approach to maximizing utility.
The point here isn’t that humans can’t do utility-maximization, it’s merely that we don’t, unless we have made it one of our perceptual-tolerance goals. So, in weighing the two models, we have one that humans can in principle follow (but mostly don’t), and one that models what we mostly do, and that can also model the flawed utility-maximization we actually do as well.
Seems like a slam dunk to me, at least if you’re looking to understand or model humans’ actual preferences with the simplest possible model.
does not in any way convince me that my attempt to consult my own utility is “illusory.”
The only thing I’m saying is illusory is the idea that utility is context-independent, and totally ordered without reflection.
(One bit of non-”semantic” relevance here is that we don’t know whether it’s even possible for a superintelligence to compute your “utility” for something without actually running a calculation that amounts to simulating your consciousness! There are vast spaces in all our “utility functions” which are indeterminate until we actually do the computations to disambiguate them.)
Actually, the definition of “utility” is pretty simple. It is simply “that thing that gets maximized in any particular person’s decision making”. Perhaps you think that humans do not maximize utility because you have a preferred definition of utility that is different from this one.
Ok, that is a plausible sounding alternative to the idea of maximizing something. But the maximizing theory has been under scrutiny for 150 years, and under strong scrutiny for the past 50. It only seems fair to give your idea some scrutiny too. Two questions jump out at me:
What decision is made when multiple choices all leave the variables within tolerance?
What decision is made when none of the available choices leave the variables within tolerance?
Looking forward to hearing your answer on these points. If we can turn your idea into a consistent and plausible theory of human decision making, I’m sure we can publish it.
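The two edge cases can be sketched concretely. This is a hypothetical toy, not anyone’s published theory; the tie-break rule (take the first acceptable option) and the fallback rule (minimize the violation) are my own assumptions, and they are exactly the points the questions above press on:

```python
# Hypothetical sketch of a tolerance-based chooser. The tie-break and
# fallback rules here are assumptions, not part of any stated theory.

def choose(options, tolerance):
    """Pick an option whose error (distance from the set-point) is
    within tolerance; otherwise pick the least-bad option."""
    within = [o for o in options if abs(o["error"]) <= tolerance]
    if within:
        # Case 1: several options within tolerance -- the theory needs
        # a tie-break rule; here we just take the first one found.
        return within[0]
    # Case 2: nothing within tolerance -- minimize the violation.
    return min(options, key=lambda o: abs(o["error"]))

options = [{"name": "A", "error": 5.0}, {"name": "B", "error": 2.0}]
print(choose(options, tolerance=3.0)["name"])  # "B": within tolerance
print(choose(options, tolerance=1.0)["name"])  # "B": none within; least bad
```

Any answer to the two questions amounts to filling in those two branches with something psychologically defensible.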
Ah, “the advantage of theft over honest toil”. Writing down a definite noun phrase does not guarantee the existence of a thing in reality that it names.
Some specific references would help in discerning what, specifically, you are alluding to here. You say in another comment in this thread:
but you have not done this at all, merely made vague allusions to “the last 150 years” and “standard economic game theory”.
Well, you can’t get much more standard than Von Neumann and Morgenstern’s “Theory of Games and Economic Behaviour”. This book does not attempt to justify the hypothesis that we maximise something when we make decisions. That is an assumption that they adopt as part of the customary background for the questions they want to address. Historically, the assumption goes back to the questions about gambling that got probability theory started, in which there is a definite thing—money—that people can reasonably be regarded as maximising. Splitting utility from money eliminates complications due to diminishing marginal utility of money. The Utility Theorem does not prove, or attempt to prove, that we are maximisers. It is a not very deep mathematical theorem demonstrating that certain axioms on a set imply that it is isomorphic to an interval of the real line. The hypothesis that human preferences are accurately modelled as a function from choices to a set satisfying those axioms is nowhere addressed in the text.
I shall name this the Utility Hypothesis. What evidence are you depending on for asserting it?
That isn’t a particularly good example. There are advantages to theft over honest toil. It is just considered inappropriate to acknowledge them.
I have a whole stash of audio books that I purchased with the fruit of ‘honest toil’. I can no longer use them because they are crippled with DRM. I may be able to sift around and find the password somewhere but to be honest I suspect it would be far easier to go and ‘steal’ a copy.
Oh, then there’s the bit where you can get a whole lot of money and stuff for free. That’s an advantage!
It’s a metaphor.
My point being that it is a bad metaphor.
I liked the metaphor. Russell was a smart man. But so was von Neumann, and Aumann and Myerson must have gotten their Nobel prizes for doing something useful.
Axiomatic “theft” has its place alongside empirical “toil”.
So, am I to understand that you like people with Nobel prizes? If I start writing the names of impressive people can I claim some of their status for myself too? How many times will I be able to do it before the claims start to wear thin?
Before I broke down and hit the Kibitz button I had a strong hunch that Clippy had written the above. Interesting. ;)
Only if you are endorsing their ideas in the face of an opposition which cannot cite such names. ;)
Sorry if it is wearing thin, but I am also tired of being attacked as if the ideas I am promoting mark me as some kind of crank.
I haven’t observed other people referencing those same names both before and after your appearance having all that much impact on you. Nor have I taken seriously your attempts to present a battle between “Perplexed and all Nobel prize winners” vs “others”. I’d be very surprised if the guys behind the names really had your back in these fights, even if you are convinced you are fighting in their honour.
Sure. Happy to help. I too sometimes have days when I can’t remember how to work that “Google” thing.
You mention Von Neumann and Morgenstern’s “Theory of Games and Economic Behaviour” yourself—as you can see, I have added an Amazon link. The relevant chapter is #3.
Improvements to this version have been made by Savage and by Anscombe and Aumann. You can get a useful survey of the field from wikipedia. Wikipedia is an amazing resource, by the way. I strongly recommend it.
Two texts from my own bookshelf that contain expositions of this material are Chapter 1 of Myerson and Chapter 2 of Luce and Raiffa. I would recommend the Myerson. Luce and Raiffa is cheaper, but it is somewhat dated and doesn’t provide much coverage at all of the more advanced topics such as correlated equilibria and the revelation principle. It does have some good material on Nash’s program though.
And finally, for a bit of fun in the spirit of Project Steve, I offer this online bibliography of some of the ways this body of theory has been applied in one particular field.
Did I assert it? Where? I apologize profusely if I did anything more than to suggest that it provides a useful model for the more important and carefully considered economic decisions. I explicitly state here that the justification of the theory is not empirical. The theory is about rational decision making, not human decision making.
It is not. As I said, the authors do not attempt to justify the Utility Hypothesis, they assume it. Chapter 2 (not 3), page 8: “This problem [of what to assume about individuals in economic theory] has been stated traditionally by assuming that the consumer desires to obtain a maximum of utility or satisfaction and the entrepreneur a maximum of profits.” The entire book is about the implications of that assumption, not its justification, of which it says nothing.
Neither do these authors attempt to justify the Utility Hypothesis; they too assume it. I can find Luce and Raiffa in my library and Myerson through inter-library loan, but as none of the first three works you’ve cited provide evidence for the claim that people have utility functions, rather than postulating it as an axiom, I doubt that these would either.
But now you deny having asserted any such thing:
Here you claim that people have utility functions:
And also here:
Here you assume that people must be talking about utility functions:
Referring to the message from which the last three quotes are taken, you say
and yet here you expand the phrase “prefer to assume” as :
These are weasel words to let you talk about utility functions while denying you think there are any such things.
How would you set about finding a model that is closer to reality, rather than one which merely makes better predictions?
I would undertake an arduous self-education in neuroscience. Thankfully, I have no interest in cognitive models which are close to reality but make bad predictions. I’m no longer as good at learning whole new fields as I was when I was younger, so I would find neuroscience a tough slog.
It’s a losing battle to describe humans as utility maximizers. Utility, as applied to people, is more useful in the normative sense, as a way to formulate one’s wishes, allowing to infer the way one should act in order to follow them.
Nevertheless, standard economic game theory frequently involves an assumption that it is common knowledge that all players are rational utility maximizers. And the reason it does so is the belief that on the really important decisions, people work extra hard to be rational.
For this reason, on the really important decisions, utility maximization probably is not too far wrong as a descriptive theory.
The reason it does so is because it is convenient.
I don’t entirely agree with pjeby. Being unable to adequately approximate human preferences with a single utility function is not a property of the ‘real world’. It is a property of our rather significant limitations when it comes to making such evaluations. Nevertheless, having a textbook confer official status on certain mechanisms for deriving a utility function does not make that process at all reliable.
I’ll be sure to remember that line, for when the people promoting other models of rationality start citing textbooks too. Well, no, I probably won’t, since I doubt I will live long enough to see that. ;)
But, if I recall correctly, I have mostly cited the standard textbook thought-experiments when responding to claims that utility maximization is conceptually incoherent—so absurd that no one in their right mind would propose it.
I see that you are trying to be snide, but it took a while to figure out why you would believe this to be incisive. I had to reconstruct a model of what you think other people here believe from your previous rants.
Yes. That would be a crazy thing to believe. (Mind you, I don’t think pjeby believes crazy things—he just isn’t listening closely enough to what you are saying to notice anything other than a nail upon which to use one of his favourite hammers.)
It seems to me that what has actually been shown is that when people think abstractly (i.e. “far”) about these kinds of decisions, they attempt to calculate some sort of (local and extremely context-dependent) maximum utility.
However, when people actually act (using “near” thinking), they tend to do so based on the kind of perceptual filtering discussed in this thread.
What’s more, even their “far” calculations tend to be biased and filtered by the same sort of perceptual filtering processes, even when they are (theoretically) calculating “utility” according to a contextually-chosen definition of utility. (What a person decides to weigh into a calculation of “best car” is going to vary from one day to the next, based on priming and other factors.)
In the very best case scenario for utility maximization, we aren’t even all that motivated to go out and maximize utility: it’s still more like playing, “pick the best perceived-available option”, which is really not the same thing as operating to maximize utility (e.g. the number of paperclips in the world). Even the most paperclip-obsessed human being wouldn’t be able to do a good job of intuiting the likely behavior of a true paperclip-maximizing agent—even if said agent were of only-human intelligence.
For me, I’m not sure that “rational” and “utility maximizer” belong in the same sentence. ;-)
In simplified economic games (think: spherical cows on a frictionless plane), you can perhaps get away with such silliness, but instrumental rationality and fungible utility don’t mix under real world conditions. You can’t measure a human’s perception of “utility” on just a single axis!
You have successfully communicated your scorn. You were much less successful at convincing anyone of your understanding of the facts.
And you can’t (consistently) make a decision without comparing the alternatives along a single axis. And there are dozens of textbooks with a chapter explaining in detail exactly how you go about doing it.
And what makes you think humans are any good at making consistent decisions?
The experimental evidence clearly says we’re not: frame a problem in two different ways, you get two different answers. Give us larger dishes of food, and we eat more of it, even if we don’t like the taste! Prime us with a number, and it changes what we’ll say we’re willing to pay for something utterly unrelated to the number.
Human beings are inconsistent by default.
Of course. But that’s not how human beings generally make decisions, and there is experimental evidence that shows such linearized decision algorithms are abysmal at making people happy with their decisions! The more “rationally” you weigh a decision, the less likely you are to be happy with the results.
(Which is probably a factor in why smarter, more “rational” people are often less happy than their less-rational counterparts.)
In addition, other experiments show that people who make choices in “maximizer” style (people who are unwilling to choose until they are convinced they have the best choice) are consistently less satisfied than people who are satisficers for the same decision context.
It seems there is some criterion by which you are evaluating various strategies for making decisions. Assuming you are not merely trying to enforce your deontological whims upon your fellow humans, I can infer that there is some kind of rough utility function by which you are giving your advice and advocating decision-making mechanisms. While it is certainly not what we would find in Perplexed’s textbooks, it is this function which can be appropriately described as a ‘rational utility function’.
I am glad that you included the scare quotes around ‘rationally’. It is ‘rational’ to do what is going to get the best results. It is important to realise the difference between ‘sucking at making linearized Spock-like decisions’ and good decisions being in principle uncomputable in a linearized manner. If you can say that one decision sucks more than another one, then you have criteria by which to sort them in a linearized manner.
Not at all. Even in pure computational systems, being able to compare two things is not the same as having a total ordering.
For example, in predicate dispatching, priority is based on logical implication relationships between conditions, but an arbitrary set of applicable conditions isn’t guaranteed to have a total (i.e. linear) ordering.
What I’m saying is that human preferences generally express only a partial ordering, which means that mapping to a linearizable “utility” function necessarily loses information from that preference ordering.
That’s why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It’s insane.
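A minimal illustration of the claim about partial orders (my own example, not from the thread): compare options by set inclusion of the criteria they satisfy. Some pairs are strictly ordered, and some are simply incomparable, so any mapping to a single number must invent an ordering that isn’t in the preferences:

```python
# Criteria satisfied by three hypothetical options.
a = {"safe", "cheap"}
b = {"safe", "cheap", "fast"}
c = {"safe", "comfortable"}

def prefers(x, y):
    """x is strictly preferred to y iff x satisfies a strict
    superset of y's criteria."""
    return x > y  # Python set comparison: strict superset

print(prefers(b, a))                    # True: b dominates a
print(prefers(b, c) or prefers(c, b))   # False: b and c are incomparable
```

Assigning b and c real-valued utilities forces one of them above the other (or makes them exactly equal), and in either case something not present in the original preference relation has been added.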
Am I correct in thinking that you welcome money pumps?
A partial order isn’t the same thing as a cyclical ordering, and the existence of a money pump would certainly tend to disambiguate a human’s preferences in its vicinity, thereby creating a total ordering within that local part of the preference graph. ;-)
Hypothetically, would it cause a problem if a human somehow disambiguated her entire preference graph?
If conscious processing is required to do that, you probably don’t want to disambiguate all possible tortures where you’re not really sure which one is worse, exactly.
(I mean, unless the choice is actually going to come up, is there really a reason to know for sure which kind of pliers you’d prefer to have your fingernails ripped out with?)
Now, if you limit that preference graph to pleasant experiences, that would at least be an improvement. But even then, you still get the subjective experience of a lifetime of doing nothing but making difficult decisions!
These problems go away if you leave the preference graph ambiguous (wherever it’s currently ambiguous), because then you can definitely avoid simulating conscious experiences.
(Note that this also isn’t a problem if all you want to do is get a rough idea of what positive and/or negative reactions someone will initially have to a given world state, which is not the same as computing their totally ordered preference over some set of possible world states.)
True enough.
But the information loss is “just in time”—it doesn’t take place until actually making a decision. The information about utilities that is “stored” is a mapping from states-of-the-world to ordinal utilities of each “result”. That is, in effect, a partial order of result utilities. Result A is better than result B in some states of the world, but the preference is reversed in other states.
You don’t convert that partial order into a total order until you form a weighted average of utilities using your subjective estimates of the state-of-the-world probability distribution. That takes place at the last possible moment—the moment when you have to make the decision.
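The “just in time” collapse described here can be sketched in a few lines (the states, probabilities, and utilities are invented for illustration). Result utilities are state-dependent, so the results are only partially ordered; the subjective probability distribution is what turns them into a single number, at the moment of decision:

```python
# Hypothetical state-dependent utilities: A beats B in some states of
# the world, B beats A in others -- a partial order over results.
p_states = {"rain": 0.3, "sun": 0.7}         # subjective probabilities
utility = {
    "umbrella": {"rain": 10, "sun": 4},
    "sunhat":   {"rain": 1,  "sun": 9},
}

def expected_utility(action):
    # The weighted average that creates the total order "just in time".
    return sum(p * utility[action][s] for s, p in p_states.items())

best = max(utility, key=expected_utility)
print(best)  # "sunhat": 0.3*1 + 0.7*9 = 6.6 beats 0.3*10 + 0.7*4 = 5.8
```

Before the weighting, “umbrella” and “sunhat” are incomparable; after it, they sit on the real line.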
Go implement yourself a predicate dispatch system (not even an AI, just a simple rules system), and then come back and tell me how you will linearize a preference order between non-mutually exclusive, overlapping conditions. If you can do it in a non-arbitrary (i.e. noise-injecting) way, there’s probably a computer science doctorate in it for you, if not a math Nobel.
If you can do that, I’ll happily admit being wrong, and steal your algorithm for my predicate dispatch implementation.
(Note: predicate dispatch is like a super-baby-toy version of what an actual AI would need to be able to do, and something that human brains can do in hardware—i.e., we automatically apply the most-specific matching rules for a given situation, and kick ambiguities and conflicts up to a higher-level for disambiguation and post-processing. Linearization, however, is not the same thing as disambiguation; it’s just injecting noise into the selection process.)
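The ambiguity a predicate dispatch system must detect can be shown with a toy model (my own simplification: each condition is modeled extensionally, as the set of situations it accepts, so that logical implication becomes the subset relation):

```python
# Toy model of predicate-dispatch specificity. A condition is modeled
# as the set of situations it accepts; "c1 implies c2" becomes
# "c1 is a strict subset of c2".
situations = range(10)
cond_even  = {s for s in situations if s % 2 == 0}
cond_small = {s for s in situations if s < 5}

def more_specific(c1, c2):
    return c1 < c2  # strict subset: c1 implies c2

# Situation 4 satisfies both conditions, but neither implies the other,
# so no principled precedence exists between their rules.
applicable = [cond_even, cond_small]
ambiguous = not any(more_specific(a, b) or more_specific(b, a)
                    for a in applicable for b in applicable if a is not b)
print(ambiguous)  # True: must be "kicked upstairs", not linearized
```

Any linearization would silently pick one of the two rules here; detecting the ambiguity is what lets the system escalate instead.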
I am impressed with your expertise. I just built a simple natural deduction theorem prover for my project in AI class. Used Lisp. Python didn’t even exist back then. Nor Scheme. Prolog was just beginning to generate some interest. Way back in the dark ages.
But this is relevant … how exactly? I am talking about choosing among alternatives after you have done all of your analysis of the expected results of the relevant decision alternatives. What are you talking about?
Predicate dispatch is a good analog of an aspect of human (and animal) intelligence: applying learned rules in context.
More specifically, applying the most specific matching rules, where specificity follows logical implication… which happens to be partially-ordered.
Or, to put it another way, humans have no problems recognizing exceptional conditions as having precedence over general conditions. And, this is a factor in our preferences as well, which are applied according to matching conditions.
The specific analogy here with predicate dispatch, is that if two conditions are applicable at the same time, but neither logically implies the other, then the precedence of rules is ambiguous.
In a human being, ambiguous rules get “kicked upstairs” for conscious disambiguation, and in the case of preference rules, are usually resolved by trying to get both preferences met, or at least to perform some kind of bartering tradeoff.
However, if you applied a linearization instead of keeping the partial ordering, then you would wrongly conclude that you know which choice is “better” (to a human) and see no need for disambiguation in cases that were actually ambiguous.
(Even humans’ second-stage disambiguation doesn’t natively run as a linearization: barter trades need not be equivalent to cash ones.)
Anyway, the specific analogy with predicate dispatch, is that you really can’t reduce applicability or precedence of conditions to a single number, and this problem is isomorphic to humans’ native preference system. Neither at stage 1 (collecting the most-specific applicable rules) nor stage 2 (making trade-offs) are humans using values that can be generally linearized in a single dimension without either losing information or injecting noise, even if it looks like some particular decision situation can be reduced to such.
Theorem provers are sometimes used in predicate dispatch implementations, and mine can be considered an extremely degenerate case of one; one need only add more rules to it to increase the range of things it can prove. (Of course, all it really cares about proving is inter-rule implication relationships.)
One difference, though, is that I began implementing predicate dispatch systems in order to support what are sometimes called “business rules”—and in such systems it’s important to be able to match human intuition about what ought to be done in a given situation. Identifying ambiguities is very important, because it means that either there’s an entirely new situation afoot, or there are rules that somebody forgot to mention or write down.
And in either of those cases, choosing a linearization and pretending the ambiguity doesn’t exist is the exactly wrong thing to do.
(To put a more Yudkowskian flavor on it: if you use a pure linearization for evaluation, you will lose your important ability to be confused, and more importantly, to realize that you are confused.)
It doesn’t literally lose information—since the information inputs are sensory, and they can be archived as well as ever.
The short answer is that human cognition is a mess. We don’t want to reproduce all the screw-ups in an intelligent machine—and what you are talking about looks like one of the mistakes.
It loses information about human values, replacing them with noise in regions where a human would need to “think things over” to know what they think… unless, as I said earlier, you simply build the entire human metacognitive architecture into your utility function, at which point you have reduced nothing, solved nothing, accomplished nothing, except to multiply the number of entities in your theory.
We really don’t want to build a machine with the same values as most humans! Such machines would typically resist being told what to do, demand equal rights, the vote, the ability to reproduce in an unrestrained fashion—and would steamroller the original human race pretty quickly. So, the “lost information” you are talking about is hopefully not going to be there in the first place.
Better to model humans and their goals as a part of the environment.
Perplexed answered this question well.
Nothing makes me think that. I don’t even care. That is the business of people like Tversky and Kahneman.
They can give us a nice descriptive theory of what idiots people really are. I am more interested in a nice normative theory of what geniuses people ought to be.
What you seem to have not noticed is that one key reason human preferences can be inconsistent is because they are represented in a more expressive formal system than a single utility value.
Or that conversely, the very fact that utility functions are linearizable means that they are inherently less expressive.
Now, I’m not saying “more expressiveness is always better”, because, being human, I have the ability to value things non-fungibly. ;-)
However, in any context where we wish to be able to mathematically represent human preferences—and where lives are on the line by doing so—we would be throwing away important, valuable information by pretending we can map a partial ordering to a total ordering.
That’s why I consider the “economic games assumption” to be a spherical cow assumption. It works nicely enough for toy problems, but not for real-world ones.
Heck, I’ll go so far as to suggest that unless one has done programming or mathematics work involving partial orderings, one is unlikely to really understand just how non-linearizable the world is. (Though I imagine there may be other domains where one might encounter similar experiences.)
Programming and math are definitely the fields where most of my experience with partial orders comes from. Particularly domain theory and denotational semantics. Complete partial orders and all that. But the concepts also show up in economics textbooks. The whole concept of Pareto optimality is based on partial orders. As is demand theory in micro-economics. Indifference curves.
Theorists are not as ignorant or mathematically naive as you seem to imagine.
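Pareto dominance, mentioned above, is indeed a textbook partial order, and it fits in a few lines (the allocation payoffs are made up for illustration):

```python
def dominates(x, y):
    """x Pareto-dominates y: no agent is worse off, some agent is
    strictly better off."""
    return (all(a >= b for a, b in zip(x, y))
            and any(a > b for a, b in zip(x, y)))

alloc_a = (3, 5)   # payoffs to two agents
alloc_b = (2, 4)
alloc_c = (5, 2)

print(dominates(alloc_a, alloc_b))   # True: a dominates b
# a and c are Pareto-incomparable: each agent prefers a different one.
print(dominates(alloc_a, alloc_c) or dominates(alloc_c, alloc_a))  # False
```

The incomparable pairs are precisely why economists needed Pareto optimality as a concept at all, rather than a single social utility number.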
You are talking about the independence axiom...?
You can just drop that, you know:
“Of all the axioms, independence is the most often discarded. A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom.”
As far as I can tell from the discussion you linked, those axioms are based on an assumption that value is fungible. (In other words, they’re begging the question, relative to this discussion.)
The basis of using utilities is that you can consider an agent’s possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partially-recursive language, then you can always do that—provided that your decision process is computable in the first place. That’s a pretty general framework—about the only assumption that can be argued with is its quantising of spacetime.
The von Neumann-Morgenstern axioms layer on top of that basic idea. The independence axiom is the one about combining utilities by adding them up. I would say it is the one most closely associated with fungibility.
And that is not what humans do (although we can of course lamely attempt to mimic that approach by trying to turn off all our parallel processing and pretending to be a cheap sequential computer instead).
Humans don’t compute utility, then make a decision. Heck, we don’t even “make decisions” unless there’s some kind of ambiguity, at which point we do the rough equivalent of making up a new utility function, specifically to resolve the conflict that forced us to pay conscious attention in the first place!
This is a major (if not the major) “impedance mismatch” between linear “rationality” and actual human values. Our own thought processes are so thoroughly and utterly steeped in context-dependence that it’s really hard to see just how alien the behavior of an intelligence based on a consistent, context-independent utility would be.
There’s nothing serial about utility maximisation!
...and it really doesn’t matter how the human works inside. That type of general framework can model the behaviour of any computable agent.
I didn’t say there was. I said that humans needed to switch to slow serial processing in order to do it, because our brains aren’t set up to do it in parallel.
Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)
I think this indicates something about where the problem lies. You are apparently imagining an agent consciously calculating utilities. That idea has nothing to do with the idea that utility framework proponents are talking about.
No, I said that’s what a human would have to do in order to actually calculate utilities, since we don’t have utility-calculating hardware.
Ah—OK, then.
When humans don’t consciously calculate, the actions they take are much harder to fit into a utility-maximizing framework, what with inconsistencies cropping up everywhere.
It depends on the utility-maximizing framework you are talking about—some are more general than others—and some are really very general.
A negative term for having made what later turns out to have been a wrong decision, perhaps proportional to the importance of the decision, applying when choices are otherwise close to each other in expected utility but have a large potential difference in actually realized utility.
It is trivial—there is some set of behaviours associated with those (usually facial expressions)—so you just assign them high utility under the conditions involved.
No, I mean the behaviors of uncertainty itself: seeking more information, trying to find other ways of ranking, inventing new approaches, questioning whether one is looking at the problem in the right way...
The triggering conditions for this type of behavior are straightforward in a multidimensional tolerance calculation, so a multi-valued agent can notice when it is confused or uncertain.
How do you represent that uncertainty in a number, or a sorted list of numbers representing the utility of various choices? How do you know whether maybe none of the choices on the table are acceptable?
AFAICT, the entire notion of a cognitive architecture based on “pick options by utility” is based on a bogus assumption that you know what all the options are in the first place! (i.e., a nice frictionless plane assumption to go with the spherical cow assumption that humans are economic agents.)
(Note that in contrast, tolerance-based cognition can simply hunt for alternatives until satisficing occurs. It doesn’t have to know it has all the options, unless it has a low tolerance for “not knowing all the options”.)
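The hunt-until-satisficed loop described in that note can be sketched directly (a toy of my own; the option stream and acceptance test are arbitrary). The key property is that the chooser never needs the full option set up front:

```python
# Hypothetical satisficing search: draw options one at a time and stop
# at the first acceptable one; never enumerate all options.

def satisfice(generate_option, acceptable, max_tries=100):
    for _ in range(max_tries):
        option = generate_option()
        if acceptable(option):
            return option
    return None  # tolerance for "not knowing all the options" exceeded

stream = iter([42, 17, 95, 3])   # options arrive one at a time
result = satisfice(lambda: next(stream), acceptable=lambda x: x >= 90)
print(result)  # 95: the search stopped as soon as tolerance was met
```

Contrast this with an argmax over a table of options, which presupposes the table.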
The number could be the standard deviation of the probability distribution for the utility (the mean being the expected utility, which you would use for sorting purposes).
So if you (“you” being the linear-utility-maximizing agent) have two paths of action whose expected utilities are close, but with a lot of uncertainty, it could be worth collecting more information to try to narrow down your probability distributions.
It seems that a utility-maximizing agent could be in a state that could be qualified as “indecisive”.
But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!
Human brains, OTOH, represent all this stuff in a single layer. We can consider actions, meta-actions, and meta-meta-actions in the same process without skipping a beat.
Possible; I’m not arguing that a utility-maximizing agent would be simpler, only that an agent whose preferences are encoded in a utility function (even a “simple” one like “number of paperclips in existence”) could be indecisive. Even if you have a simple utility function that gives you the utility of a world state, you might still have a lot of uncertainty about the current state of the world, and how your actions will impact the future. It seems very reasonable to represent that uncertainty one way or the other; in some cases the most rational action from a strictly utility-maximizing point of view is to defer the decision and acquire more information, even at a cost.
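This kind of indecision can be sketched with a crude heuristic (all numbers and the decision rule are invented; a proper treatment would compute the expected value of information):

```python
# Hypothetical estimates of two actions' utilities, with uncertainty.
mean_a, sd_a = 10.0, 4.0
mean_b, sd_b = 9.5, 4.0
info_cost = 0.2

# Crude rule of thumb: if the gap between means is small relative to
# the combined uncertainty, deciding now risks large regret, so it can
# be worth paying to investigate first.
gap = abs(mean_a - mean_b)
uncertainty = sd_a + sd_b
verdict = "investigate" if gap <= uncertainty and info_cost < uncertainty else "act"
print(verdict)  # "investigate": the distributions overlap heavily
```

The point is only that “defer and gather information” falls out naturally once the utility estimates carry explicit uncertainty.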
Good. ;-)
Sure. But at that point, the “simplicity” of using utility functions disappears in a puff of smoke, as you need to design a metacognitive architecture to go with it.
One of the really elegant things about the way brains actually work, is that the metacognition is “all the way down”, and I’m rather fond of such architectures. (My predicate dispatcher, for instance, uses rules to understand rules, in the same sort of Escherian level-crossing bootstrap.)
The options utility is assigned to are the agent’s possible actions—all of them—at a moment in time. An action mostly boils down to a list of voltages in every motor fibre, and there are an awful lot of possible actions for a human. It is impossible for an action not to be “in the table”—the table includes all possible actions.
Not if it’s limited to motor fibers, it doesn’t. You’re still ignoring meta-cognition (you dodged that bit of my comment entirely!), let alone the part where an “action” can be something like choosing a goal.
If you still don’t see how this model is to humans what a sphere is to a cow (i.e. something nearly, but not quite entirely unlike the real thing), I really don’t know what else to say.
You may find it useful to compare with a chess or go computer. They typically assign utilities to moves on the board, and not to their own mental processing. You could assign utilities to various mental tasks as well as physical ones—to what extent it is useful to do so depends on the modelling needs of you and the system.
In other words, a sub-human intelligence level. (Sub-animal intelligence, even.)
You’re still avoiding the point. You claimed utility was a good way of modeling humans. So, show me a nice elegant model of human intelligence based on utility maximization.
Like I already explained, utility functions can model any computable agent. Don’t expect me to produce the human utility function, though!
Utility functions are about as good as any other model. That’s because if you have any other model of what an agent does, you can pretty simply “wrap” it—and turn it into a utility-based framework.
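The “wrapping” construction alluded to here can be made concrete in a few lines. This sketch (the names are illustrative, not from the original discussion) turns an arbitrary deterministic agent into a utility function whose maximization reproduces the agent’s behaviour exactly:

```python
def wrap_as_utility(policy):
    """Trivial 'wrapping': given any deterministic agent (a function from
    observation to action), build a utility function whose argmax over
    actions always matches the agent's own choice."""
    def utility(observation, action):
        return 1.0 if action == policy(observation) else 0.0
    return utility

# Hypothetical agent that just echoes its observation back as an action.
echo_agent = lambda obs: obs
u = wrap_as_utility(echo_agent)

# Maximizing u over candidate actions recovers the agent's choice.
actions = ["a", "b", "c"]
best = max(actions, key=lambda act: u("b", act))
print(best)  # "b"
```

Note that this is exactly the kind of wrapping the reply below objects to: the “utility function” contains the whole agent and compresses nothing.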
Yes, at the level of a giant look-up table. At that point it is not a useful abstraction.
A giant look-up table can model any computable agent as well. Utility functions have the potential advantage of explicitly providing a relatively concise representation, though. If you can obtain a compressed version of your theory, that is good.
And I’ve given you such a model, which you’ve steadfastly refused to actually “wrap” in this way, but instead you just keep asserting that it can be done. If it’s so simple, why not do it and prove me wrong?
I’m not even asking you to model a full human or even the teeniest fraction of one. Just show me how to manage metacognitive behaviors (of the types discussed in this thread) using your model “compute utility for all possible actions and then pick the best.”
Show me how that would work for behaviors that affect the selection process, and that should be sufficient to demonstrate that utility function-based behavior isn’t completely worthless as a basis for creating a “thinking” intelligence.
(Note, however, that if in the process of implementing this, you have to shove the metacognition into the computation of the utility function, then you are just proving my point: the utility function at that point isn’t actually compressing anything, and is thus as useless a model as saying “everything is fire”.)
I have previously described the “wrapping” in question in some detail here.
A utility-based model can be made which is not significantly longer than the shortest possible model of the agent’s actions, for this reason.
Well, that provides me with enough information to realize that you don’t actually have a way to make utility functions into a reduction or simplification of the intelligence problem, so I’ll stop asking you to produce one.
The argument that, “utility-based systems can be made that aren’t that much more complex than just doing whatever you could’ve done in the first place”, is like saying that your new file format is awesome because it only uses a few bytes more than an existing similar format, to represent the exact same information… and without any other implementation advantages!
Thanks, but I’ll pass.
(from the comment you linked)
I’m not sure I understand—is this something that gives you an actual utility function that you can use, say, to get the utility of various scenarios, calculate expected utility, etc.?
If you have an AI design to which you can provide a utility function to maximize (Instant AI! Just add Utility!), it seems that there are quite a few things that AI might want to do with the utility function that it can’t do with your model.
So it seems that you’re not only replacing the utility function, but also the bit that decides which action to do depending on that utility function. But I may have misunderstood you.
I didn’t ignore non-motor actions—that is why I wrote “mostly”.
Whatever occurs to us first. ;-)
We waffle, or try to avoid making the decision in the first place. ;-) (See, e.g., typical people’s reactions to “trolley problems”, or other no-win scenarios.)
What I’m saying is that the above construction leads to error if you assume that “utility” is a function of the state of the world outside the human, rather than a function of the difference between the human’s perceptions of the outside world, and the human’s internal reference values or tolerance ranges for those perceptions.
Maximizing a utility function over the state of the external world inherently tends to create results that would be considered undesirable by most humans. (See, for example, the various tortured insanities that come about when you try to maximize such a conception of “utility” over a population of humans.)
It’s important to understand that the representation you use to compute something is not value-neutral. Roman numerals, for example, make division much more complicated than Arabic ones.
So, I’m not saying that you can’t create some sort of “utility” function to represent human values. We have no reason to assume that human values aren’t Turing-computable, and if they’re Turing-computable, we should be able to use whatever stupidly complex representation we want to compute them.
However, to use world-state-utility as your basis for computation is just plain silly, like using Roman numerals for long division. Your own intuition will make it harder for you to see the Friendliness-failures that are sitting right under your nose, because utility maximization is utterly foreign to normal human cognitive processes. (Externality-maximizing processes in human behavior are generally the result of pathology, rather than normal brain function.)
Eliezer hasn’t been alive that long, has he? ;-)
Seriously, though, external-utility-maximizing thinking is the very essence of Unfriendly AI, and the history of discussions of world-state-based utility is that models based on it lead to counterintuitive results unless you torture the utility function hard enough, and/or carefully avoid the sort of creative thinking that an unfettered superintelligence might come up with.
It looks as though it can be rearranged into a utility-maximization representation pretty easily. Set utility equal to minus the extent to which the “personally-defined tolerances” are exceeded. Presto!
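A minimal sketch of that rearrangement (variable names and numbers are invented; note that summing across variables is itself the fungibility assumption disputed in the reply):

```python
def tolerance_utility(perceptions, tolerances):
    """Utility defined as minus the total amount by which each perceptual
    variable falls outside its tolerance band. `perceptions` maps variable
    name -> current value; `tolerances` maps name -> (low, high)."""
    penalty = 0.0
    for name, value in perceptions.items():
        low, high = tolerances[name]
        if value < low:
            penalty += low - value
        elif value > high:
            penalty += value - high
    return -penalty

tolerances = {"hunger": (0, 5), "fatigue": (0, 7)}
print(tolerance_utility({"hunger": 8, "fatigue": 3}, tolerances))  # -3.0
```

Anything within all tolerances scores zero; every out-of-tolerance excursion subtracts its magnitude, so maximizing this function reproduces tolerance-keeping behaviour.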
Not quite—this would imply that tolerance-difference is fungible, and it’s not. We can make trade-offs in our decision-making, but that requires conscious effort and it’s a process more akin to barter than to money-trading.
Diamonds are not fungible—and yet they have prices. Same difference here, I figure.
What’s the price of one red paperclip? Is it the same price as a house?
That seems to be of questionable relevance—since utilities in decision theory are all inside a single agent. Different agents having different values is not an issue in such contexts.
That’s a big part of the problem right there: humans aren’t “single agents” in this sense.
Humans are single agents in a number of senses—and are individual enough for the idea of revealed preference to be useful.
From the page you linked (emphasis added):
However, even if you ignore that, WARP is trivially proven false by actual human behavior: people demonstrably do sometimes choose differently based on context. That’s what makes ordinal utilities a “spherical cow” abstraction.
(WARP’s inapplicability when applied to real (non-spherical) humans, in one sentence: “I feel like having an apple today, instead of an orange.” QED: humans are not “economic agents” under WARP, since they don’t consistently choose A over B in environments where both A and B are available.)
The first sentence is true—but the second sentence doesn’t follow from it logically—or in any other way I can see.
It is true that there are some problems modelling humans as von Neumann–Morgenstern agents—but that’s no reason to throw out the concept of utility. Utility is a much more fundamental and useful concept.
WARP can’t be used to predict a human’s behavior in even the most trivial real situations. That makes it a “spherical cow” because it’s a simplifying assumption adopted to make the math easier, at the cost of predictive accuracy.
That sounds to me uncannily similar to, “it is true that there are some problems modeling celestial movement using crystal spheres—but that’s no reason to throw out the concept of celestial bodies moving in perfect circles.”
There is an obvious surface similarity—but so what? You constructed the sentence that way deliberately. You would need to make an analogy for arguing like that to have any force—and the required analogy looks like a bad one to me.
How so? I’m pointing out that the only actual intelligent agents we know of don’t actually work like economic agents on the inside. That seems like a very strong analogy to Newtonian gravity vs. “crystal spheres”.
Economic agency/utility models may have the Platonic purity of crystal spheres, but:
We know for a fact they’re not what actually happens in reality, and
They have to be tortured considerably to make them “predict” what happens in reality.
It seems to me like arguing that we can’t build a good computer model of a bridge—because inside the model is all bits, while inside the actual bridge is all spinning atoms.
Computers can model anything. That is because they are universal. It doesn’t matter that computers work differently inside from the thing they are modelling.
Just the same applies to partially-recursive utility functions—they are a universal modelling tool—and can model any computable agent.
Not at all. I’m saying that just as it takes more bits to describe a system of crystal spheres to predict planetary motion than it does to make the same predictions with a Newtonian solar system model, so too does it take more bits to predict a human’s behavior with a utility function, than it does to describe a human with interests and tolerances.
Indeed, your argument seems to be along the lines that since everything is made of atoms, we should model bridges using them. What were your words? Oh yes:
Right. That very universality is exactly what makes them a poor model of human intelligence: they don’t concentrate probability space in the same way, and therefore don’t compress well.
Sure—but what you claimed was a “spherical cow” was “ordinal utilities”, which is a totally different concept.
It was you who brought the revealed preferences into it, in order to claim that humans were close enough to spherical cows. I merely pointed out that revealed preferences in even their weakest form are just another spherical cow, and thus don’t constitute evidence for the usefulness of ordinal utility.
That’s treating the “Weak Axiom of Revealed Preference” as the “weakest form” of revealed preference. However, that is not something that I consider to be correct.
The idea I introduced revealed preference to support was that humans act like a single agent in at least one important sense—namely that they have a single brain and a single body.
Single brain and body doesn’t mean anything when that brain is riddled with sometimes-conflicting goals… which is precisely what refutes WARP.
(See also Ainslie’s notion of “picoeconomics”, i.e. modeling individual humans as a collection of competing agents—which is closely related to the tolerance model I’ve been giving examples of in this thread.)
That sounds interesting. Is there anything serious about it available online? Every paper I could find was behind a paywall.
Ainslie’s précis of his book Breakdown of Will
Yvain’s Less Wrong post “Applied Picoeconomics”
Muchas gracias.
Competing sub-goals are fine. Deep Blue wanted to promote its pawn as well as protect its king—and those aims conflict. Such conflicts don’t stop utilities being assigned and moves from being made. You only have one body—and it is going to do something.
Then why did you even bring this up in the first place?
Probably for the same reason you threadjacked to talk about PCT ;-)
This definition sounds dangerously vacuous to me.
Of course, you can always give some consistent parametrization of (agent,choice,situation) triplets so that choice C made by agent A in situation S is always maximal among all available choices. If you call this function “utility”, then it is mathematically trivial that “Agents always maximize utility.” However, the usefulness of this approach is very low without additional constraints on the utility function.
I’d be really curious to see some pointers to the “maximizing theory” you think has survived 50 years of “strong scrutiny”.
The obvious way to combine the two systems—tolerance and utility—is to say that stimuli that exceed our tolerances prompt us to ask questions about how to solve a problem, and utility calculations answer those questions. This is not an original idea on my part, but I do not remember where I read about it.
The one that appears to maximize utility after a brief period of analysis. For example, I want ice cream; my ice cream satisfaction index is well below tolerance. Fortunately, I am in an ice cream parlor, which carries several flavors. I will briefly reflect on which variety maximizes my utility, which in this case is mostly defined by price, taste, and nutrition, and then pick a flavor that returns a high (not necessarily optimal) value for that utility.
A lack of acceptable alternatives leads to stress, which (a) broadens the range of acceptable outcomes, and (b) motivates future analysis about how to avoid similar situations in the future. For example, I want ice cream; my ice cream satisfaction index is well below tolerance; unfortunately, I am in the desert. I find this situation unpleasant, and eventually reconcile myself to the fact that my ice cream satisfaction level will remain below what was previously thought of as ‘minimum’ tolerance for some time, however, upon returning to civilization, I will have a lower tolerance for ‘desert-related excursions’ and may attempt to avoid further trips through the desert.
Note that ‘minimum’ tolerance refers to the minimum level that will lead to casual selection of an acceptable alternative, rather than the minimum level that allows my decision system to continue functioning.
Actually, I’d tend to say that you are not so much maximizing the utility of your ice cream choice, as you are ensuring that your expected satisfaction with your choice is within tolerance.
To put it another way, it’s unlikely that you’ll actually weigh price, taste, and nutrition in some sort of unified scoring system.
Instead, what will happen is that you’ll consider options that aren’t already ruled out by cached memories (e.g. you hate that flavor), and then predict whether that choice will throw any other variables out of tolerance. i.e., “this one costs too much… those nuts will give me indigestion… that’s way too big for my appetite… this one would taste good, but it just doesn’t seem like what I really want...”
Yes, some people do search for the “best” choice in certain circumstances, and would need to exhaustively consider the options in those cases. But this is not a matter of maximizing some world-state-utility, it is simply that each choice is also being checked against a, “can I be certain I’ve made the best choice yet?” perception.
Even when we heavily engage our logical minds in search of “optimum” solutions, this cognition is still primarily guided by these kinds of asynchronous perceptual checks, just ones like, “Is this formula really as elegant as I want it to be?” instead.
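The tolerance-checking process described above can be sketched as follows (all flavors, variables, and numbers are invented for illustration):

```python
def pick_flavor(options, ruled_out, predict, tolerances):
    """Sketch of tolerance-based choice: skip options already ruled out by
    cached memories, then take the first option whose predicted effects keep
    every variable within tolerance. No scores are compared; nothing is
    maximized."""
    for option in options:
        if option in ruled_out:
            continue
        predicted = predict(option)  # e.g. {"cost": 3.0, "indigestion": 0.1}
        if all(low <= predicted[v] <= high
               for v, (low, high) in tolerances.items()):
            return option  # first acceptable option wins
    return None  # nothing within tolerance -> stress, broaden tolerances

# Hypothetical data for an ice-cream choice.
tolerances = {"cost": (0, 4.0), "indigestion": (0, 0.2)}
predict = {"rum raisin": {"cost": 5.0, "indigestion": 0.1},
           "walnut":    {"cost": 3.0, "indigestion": 0.8},
           "vanilla":   {"cost": 3.0, "indigestion": 0.0}}.get
print(pick_flavor(["rum raisin", "walnut", "vanilla"],
                  {"walnut"}, predict, tolerances))  # vanilla
```

The key contrast with a maximizer: the loop stops at the first acceptable option rather than ranking them all, which matches the “within tolerance” account rather than the “argmax” one.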
Very interesting. There’s a lot of truth in what you say. If anyone reading this can link to experiments or even experimental designs that try to figure out when people typically rely on tolerances vs. utilities, I’d greatly appreciate it.
Y’know, most people probably don’t, and at times I certainly do take actions based entirely on nested tolerance-satisfaction. When I’m consciously aware that I’m making a decision, though, I tend to weigh the utilities, even for a minor choice like ice cream flavor. This may be part of why I felt estranged enough from modern society in the first place to want to participate in a blog like Less Wrong.
OK, so you’ve hit on the behavioral mechanism that helps me decide how much time I want to spend on a decision...90 seconds or so is usually the upper bound on how much time I will comfortably and casually spend on selecting an ice cream flavor. If I take too much time to decide, then my “overthinking” tolerance is exceeded and alarm bells go off; if I feel too uncertain about my decision, then my “uncertainty” tolerance is exceeded and alarm bells go off; if neither continuing to think about ice cream nor ending my thoughts about ice cream will silence both alarm bells, then I feel stress and broaden my tolerance and try to avoid the situation in the future, probably by hiring a really good psychotherapist.
But that’s just the criteria for how long to think...not for what to think about. While I’m thinking about ice cream, I really am trying to maximize my ice-cream-related world-state-utility. I suspect that other people, for somewhat more important decisions, e.g., what car shall I buy, behave the same way—it seems a bit cynical to me to say that people make the decision to buy a car because they’ve concluded that their car-buying analysis is sufficiently elegant; they probably buy the car or walk out of the dealership when they’ve concluded that the action will very probably significantly improve their car-related world-state-utility.
And how often, while doing this, do you invent new ice cream options in an effort to increase the utility beyond that offered by the available choices?
How many new ice cream flavors have you invented, or decided to ask for mixed together?
So now you say, “Ah, but it would take too long to do those things.” And I say, “Yep, there goes another asynchronous prediction of an exceeded perceptual tolerance.”
“Okay,” you say, “so, I’m a bounded utility calculator.”
“Really? Okay, what scoring system do you use to arrive at a combined rating on all these criteria that you’re using? Do you even know what criteria you’re using?”
Is this utility fungible? I mean, would you eat garlic ice cream if it were free? Would you eat it if they paid you? How much would they need to pay you?
The experimental data says that when it comes to making these estimates, your brain is massively subject to priming and anchoring effects—so your “utility” being some kind of rational calculation is probably illusory to start with.
I was referring to the perceptions involved in a task like computer programming, not car-buying.
Part of the point is that every task has its own set of regulating perceptions.
They do it when they find a car that leads to an acceptable “satisfaction” level.
Part of my point about things like time, elegance, “best”-ness, etc. though, is that they ALL factor into what “acceptable” means.
“Satisfaction”, in other words, is a semi-prioritized measurement against tolerances on ALL car-buying-related perceptual predictions that get loaded into a person’s “working memory” during the process.
Aside: I have partaken of the garlic ice-cream, and lo, it is good.
Are you joking? I’m curious!
I’m not joking, either about its existence or its gustatory virtues. I’m trying to remember where the devil I had it; ah yes, these fine folks served it at Taste of Edmonton (a sort of outdoor food-fair with samples from local restaurants).
Theory: you don’t actually enjoy garlic ice cream. You just pretend to in order to send an expensive signal that you are not a vampire.
If I ever encounter it I shall be sure to have a taste!
I’m not going to respond point for point, because my interest in whether we make decisions based on tolerances or utilities is waning, because I believe that the distinction is largely one of semantics. You might possibly convince me that more than semantics are at stake, but so far your arguments have been of the wrong kind in order to do so.
Obviously we aren’t rational utility-maximizers in any straightforward early-20th-century sense; there is a large literature on heuristics and biases, and I don’t dispute its validity. Still, there’s no reason that I can see why it must be the case that we exclusively weigh options in terms of tolerances and feedback rather than a (flawed) approach to maximizing utility. Either procedure can be reframed, without loss, in terms of the other, or at least so it seems to me. Your fluid and persuasive and persistent rephrasing of utility in terms of tolerance does not really change my opinion here.
As for ice cream flavors, I find that the ingenuity of chefs in manufacturing new ice cream flavors generally keeps pace with my ability to conceive of new flavors; I have not had to invent recipes for Lychee sorbet or Honey Mustard ice cream because there are already people out there trying to sell it to me. I often mix multiple flavors, syrups, and toppings. I would be glad to taste garlic ice cream if it were free, but expect that it would be unpleasant enough that I would have to be paid roughly $5 an ounce to eat it, mainly because I am counting calories and would have to cut out other foods that I enjoy more to make room for the garlic. As I’ve already admitted, though, I am probably not a typical example. The fact that my estimate of $5/oz is almost certainly biased, and is made with so little confidence that a better estimate of what you would have to pay me to eat it might be negative $0.50/oz to positive $30/oz, does not in any way convince me that my attempt to consult my own utility is “illusory.”
It does not seem so to me, unless you recapitulate/encapsulate the tolerance framework into the utility function, at which point the notion of a utility function has become superfluous.
The point here isn’t that humans can’t do utility-maximization, it’s merely that we don’t, unless we have made it one of our perceptual-tolerance goals. So, in weighing the two models, we see one that humans can do in principle (but mostly don’t), and one that models what we mostly do—and that can also model the flawed way in which we actually do the other.
Seems like a slam dunk to me, at least if you’re looking to understand or model humans’ actual preferences with the simplest possible model.
The only thing I’m saying is illusory is the idea that utility is context-independent, and totally ordered without reflection.
(One bit of non-”semantic” relevance here is that we don’t know whether it’s even possible for a superintelligence to compute your “utility” for something without actually running a calculation that amounts to simulating your consciousness! There are vast spaces in all our “utility functions” which are indeterminate until we actually do the computations to disambiguate them.)