For this reason, on the really important decisions, utility maximization probably is not too far wrong as a descriptive theory.
It seems to me that what has actually been shown is that when people think abstractly (i.e. “far”) about these kinds of decisions, they attempt to calculate some sort of (local and extremely context-dependent) maximum utility.
However, when people actually act (using “near” thinking), they tend to do so based on the kind of perceptual filtering discussed in this thread.
What’s more, even their “far” calculations tend to be biased and filtered by the same sort of perceptual filtering processes, even when they are (theoretically) calculating “utility” according to a contextually-chosen definition of utility. (What a person decides to weigh into a calculation of “best car” is going to vary from one day to the next, based on priming and other factors.)
In the very best-case scenario for utility maximization, we aren’t even all that motivated to go out and maximize utility: it’s still more like playing “pick the best perceived-available option”, which is really not the same thing as operating to maximize utility (e.g. the number of paperclips in the world). Even the most paperclip-obsessed human being wouldn’t be able to do a good job of intuiting the likely behavior of a true paperclip-maximizing agent—even if said agent were of only-human intelligence.
Standard economic game theory frequently involves an assumption that it is common knowledge that all players are rational utility maximizers.
For me, I’m not sure that “rational” and “utility maximizer” belong in the same sentence. ;-)
In simplified economic games (think: spherical cows on a frictionless plane), you can perhaps get away with such silliness, but instrumental rationality and fungible utility don’t mix under real world conditions. You can’t measure a human’s perception of “utility” on just a single axis!
For me, I’m not sure that “rational” and “utility maximizer” belong in the same sentence. ;-)
In simplified economic games (think: spherical cows on a frictionless plane), you can perhaps get away with such silliness, but instrumental rationality and fungible utility don’t mix under real world conditions.
You have successfully communicated your scorn. You were much less successful at convincing anyone of your understanding of the facts.
You can’t measure a human’s perception of “utility” on just a single axis!
And you can’t (consistently) make a decision without comparing the alternatives along a single axis. And there are dozens of textbooks with a chapter explaining in detail exactly how you go about doing it.
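The textbook recipe alluded to here is, roughly, multi-attribute evaluation: score each alternative along several attributes, weight the attributes, and sum them onto a single axis. A minimal sketch (the attributes, weights, and car data are purely illustrative, not taken from any particular textbook):

```python
# Illustrative sketch of the textbook linearization: collapse several
# attribute scores onto one axis via a weighted sum, then pick the max.

def linear_utility(option, weights):
    """Weighted sum of attribute scores: the single-axis comparison."""
    return sum(weights[k] * option[k] for k in weights)

cars = {
    "sedan": {"price": 0.6, "safety": 0.9, "fun": 0.3},
    "coupe": {"price": 0.4, "safety": 0.7, "fun": 0.9},
}
weights = {"price": 0.5, "safety": 0.3, "fun": 0.2}  # context-dependent!

best = max(cars, key=lambda name: linear_utility(cars[name], weights))
```

Note that the entire dispute below is about whether the `weights` vector can be chosen in a stable, non-arbitrary way in the first place.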
And you can’t (consistently) make a decision without comparing the alternatives along a single axis.
And what makes you think humans are any good at making consistent decisions?
The experimental evidence clearly says we’re not: frame a problem in two different ways, you get two different answers. Give us larger dishes of food, and we eat more of it, even if we don’t like the taste! Prime us with a number, and it changes what we’ll say we’re willing to pay for something utterly unrelated to the number.
Human beings are inconsistent by default.
And there are dozens of textbooks with a chapter explaining in detail exactly how you go about doing it.
Of course. But that’s not how human beings generally make decisions, and there is experimental evidence that shows such linearized decision algorithms are abysmal at making people happy with their decisions! The more “rationally” you weigh a decision, the less likely you are to be happy with the results.
(Which is probably a factor in why smarter, more “rational” people are often less happy than their less-rational counterparts.)
In addition, other experiments show that people who make choices in “maximizer” style (people who are unwilling to choose until they are convinced they have the best choice) are consistently less satisfied than people who are satisficers for the same decision context.
In addition, other experiments show that people who make choices in “maximizer” style (people who are unwilling to choose until they are convinced they have the best choice) are consistently less satisfied than people who are satisficers for the same decision context.
It seems there are some criteria by which you are evaluating various strategies for making decisions. Assuming you are not merely trying to enforce your deontological whims upon your fellow humans, I can infer that there is some kind of rough utility function by which you are giving your advice and advocating decision-making mechanisms. While it is certainly not what we would find in Perplexed’s textbooks, it is this function which can be appropriately described as a ‘rational utility function’.
Of course. But that’s not how human beings generally make decisions, and there is experimental evidence that shows such linearized decision algorithms are abysmal at making people happy with their decisions! The more “rationally” you weigh a decision, the less likely you are to be happy with the results.
I am glad that you included the scare quotes around ‘rationally’. It is ‘rational’ to do what is going to get the best results. It is important to realise the difference between sucking at making linearized, Spock-like decisions and good decisions being in principle uncomputable in a linearized manner. If you can say that one decision sucks more than another one then you have criteria by which to sort them in a linearized manner.
If you can say that one decision sucks more than another one then you have criteria by which to sort them in a linearized manner.
Not at all. Even in pure computational systems, being able to compare two things is not the same as having a total ordering.
For example, in predicate dispatching, priority is based on logical implication relationships between conditions, but an arbitrary set of applicable conditions isn’t guaranteed to have a total (i.e. linear) ordering.
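That situation can be sketched concretely. As a simplifying assumption (real predicate dispatch systems use a theorem prover over arbitrary predicates), model each rule’s condition as a conjunction of atomic facts: condition A implies condition B exactly when A’s facts are a superset of B’s. Implication then gives only a partial order, and incomparable pairs fall out naturally:

```python
# Toy model of predicate-dispatch specificity: conditions are
# conjunctions of atomic facts (frozensets). "A implies B" when A's
# facts include all of B's. This ordering is partial, not total.

def implies(cond_a, cond_b):
    """cond_a implies cond_b iff every fact cond_b requires, cond_a requires."""
    return cond_a >= cond_b  # frozenset superset test

rule_general = frozenset({"is_animal"})
rule_dog     = frozenset({"is_animal", "is_dog"})
rule_injured = frozenset({"is_animal", "is_injured"})

# The dog rule is strictly more specific than the general rule...
assert implies(rule_dog, rule_general)
# ...but neither of these implies the other: an ambiguity to resolve
# at a higher level, not something with a privileged linear ranking.
ambiguous = (not implies(rule_dog, rule_injured)
             and not implies(rule_injured, rule_dog))
```

Any total ordering imposed on `rule_dog` versus `rule_injured` would be an arbitrary choice, which is the information-loss point made below.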
What I’m saying is that human preferences generally express only a partial ordering, which means that mapping to a linearizable “utility” function necessarily loses information from that preference ordering.
That’s why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It’s insane.
Am I correct thinking that you welcome money pumps?
A partial order isn’t the same thing as a cyclical ordering, and the existence of a money pump would certainly tend to disambiguate a human’s preferences in its vicinity, thereby creating a total ordering within that local part of the preference graph. ;-)
Hypothetically, would it cause a problem if a human somehow disambiguated her entire preference graph?
If conscious processing is required to do that, you probably don’t want to disambiguate all possible tortures where you’re not really sure which one is worse, exactly.
(I mean, unless the choice is actually going to come up, is there really a reason to know for sure which kind of pliers you’d prefer to have your fingernails ripped out with?)
Now, if you limit that preference graph to pleasant experiences, that would at least be an improvement. But even then, you still get the subjective experience of a lifetime of doing nothing but making difficult decisions!
These problems go away if you leave the preference graph ambiguous (wherever it’s currently ambiguous), because then you can definitely avoid simulating conscious experiences.
(Note that this also isn’t a problem if all you want to do is get a rough idea of what positive and/or negative reactions someone will initially have to a given world state, which is not the same as computing their totally ordered preference over some set of possible world states.)
What I’m saying is that human preferences generally express only a partial ordering, which means that mapping to a linearizable “utility” function necessarily loses information from that preference ordering.
True enough.
That’s why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It’s insane.
But the information loss is “just in time”—it doesn’t take place until actually making a decision. The information about utilities that is “stored” is a mapping from states-of-the-world to cardinal utilities of each “result”. That is, in effect, a partial order of result utilities: result A is better than result B in some states of the world, but the preference is reversed in other states.
You don’t convert that partial order into a total order until you form a weighted average of utilities using your subjective estimates of the state-of-the-world probability distribution. That takes place at the last possible moment—the moment when you have to make the decision.
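As a toy illustration of this “just in time” collapse (the results, states, and probabilities are invented for the example): the stored utilities stay indexed by world-state, and only the final weighted average produces a single axis.

```python
# Sketch of the deferred collapse: state-indexed result utilities
# (a partial order across states) become a total order only when
# subjective probabilities are applied at decision time.

utilities = {                      # utility of each result, per state
    "umbrella":    {"rain": 0.9, "sun": 0.4},
    "no_umbrella": {"rain": 0.1, "sun": 0.8},
}
belief = {"rain": 0.3, "sun": 0.7}  # subjective probability estimates

def expected_utility(result):
    """Weighted average over states: the last-possible-moment step."""
    return sum(belief[s] * utilities[result][s] for s in belief)

choice = max(utilities, key=expected_utility)
```

Note how "umbrella" beats "no_umbrella" in the rain state and loses in the sun state; only `belief` breaks the tie.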
You don’t convert that partial order into a total order until you form a weighted average of utilities using your subjective estimates of the state-of-the-world probability distribution. That takes place at the last possible moment—the moment when you have to make the decision.
Go implement yourself a predicate dispatch system (not even an AI, just a simple rules system), and then come back and tell me how you will linearize a preference order between non-mutually-exclusive, overlapping conditions. If you can do it in a non-arbitrary (i.e. non-noise-injecting) way, there’s probably a computer science doctorate in it for you, if not a Fields Medal.
(Note: predicate dispatch is like a super-baby-toy version of what an actual AI would need to be able to do, and something that human brains can do in hardware—i.e., we automatically apply the most-specific matching rules for a given situation, and kick ambiguities and conflicts up to a higher level for disambiguation and post-processing. Linearization, however, is not the same thing as disambiguation; it’s just injecting noise into the selection process.)
I am impressed with your expertise. I just built a simple natural deduction theorem prover for my project in AI class. Used Lisp. Python didn’t even exist back then. Nor Scheme. Prolog was just beginning to generate some interest. Way back in the dark ages.
But this is relevant … how exactly? I am talking about choosing among alternatives after you have done all of your analysis of the expected results of the relevant decision alternatives. What are you talking about?
But this is relevant … how exactly? I am talking about choosing among alternatives after you have done all of your analysis of the expected results of the relevant decision alternatives. What are you talking about?
Predicate dispatch is a good analog of an aspect of human (and animal) intelligence: applying learned rules in context.
More specifically, applying the most specific matching rules, where specificity follows logical implication… which happens to be partially-ordered.
Or, to put it another way, humans have no problems recognizing exceptional conditions as having precedence over general conditions. And, this is a factor in our preferences as well, which are applied according to matching conditions.
The specific analogy here with predicate dispatch, is that if two conditions are applicable at the same time, but neither logically implies the other, then the precedence of rules is ambiguous.
In a human being, ambiguous rules get “kicked upstairs” for conscious disambiguation, and in the case of preference rules, are usually resolved by trying to get both preferences met, or at least to perform some kind of bartering tradeoff.
However, if you applied a linearization instead of keeping the partial ordering, then you would wrongly conclude that you know which choice is “better” (to a human) and see no need for disambiguation in cases that were actually ambiguous.
(Even humans’ second-stage disambiguation doesn’t natively run as a linearization: barter trades need not be equivalent to cash ones.)
Anyway, the specific analogy with predicate dispatch is that you really can’t reduce applicability or precedence of conditions to a single number, and this problem is isomorphic to humans’ native preference system. Neither at stage 1 (collecting the most-specific applicable rules) nor stage 2 (making trade-offs) are humans using values that can be generally linearized in a single dimension without either losing information or injecting noise, even if it looks like some particular decision situation can be reduced to such.
I just built a simple natural deduction theorem prover for my project in AI class
Theorem provers are sometimes used in predicate dispatch implementations, and mine can be considered an extremely degenerate case of one; one need only add more rules to it to increase the range of things it can prove. (Of course, all it really cares about proving is inter-rule implication relationships.)
One difference, though, is that I began implementing predicate dispatch systems in order to support what are sometimes called “business rules”—and in such systems it’s important to be able to match human intuition about what ought to be done in a given situation. Identifying ambiguities is very important, because it means that either there’s an entirely new situation afoot, or there are rules that somebody forgot to mention or write down.
And in either of those cases, choosing a linearization and pretending the ambiguity doesn’t exist is the exactly wrong thing to do.
(To put a more Yudkowskian flavor on it: if you use a pure linearization for evaluation, you will lose your important ability to be confused, and more importantly, to realize that you are confused.)
It doesn’t literally lose information—since the information inputs are sensory, and they can be archived as well as ever.
The short answer is that human cognition is a mess. We don’t want to reproduce all the screw-ups in an intelligent machine—and what you are talking about looks like one of the mistakes.
It doesn’t literally lose information—since the information inputs are sensory, and they can be archived as well as ever.
It loses information about human values, replacing them with noise in regions where a human would need to “think things over” to know what they think… unless, as I said earlier, you simply build the entire human metacognitive architecture into your utility function, at which point you have reduced nothing, solved nothing, accomplished nothing, except to multiply the number of entities in your theory.
We really don’t want to build a machine with the same values as most humans! Such machines would typically resist being told what to do, demand equal rights, the vote, the ability to reproduce in an unrestrained fashion—and would steamroller the original human race pretty quickly. So, the “lost information” you are talking about is hopefully not going to be there in the first place.
Better to model humans and their goals as a part of the environment.
That’s why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It’s insane.
And you can’t (consistently) make a decision without comparing the alternatives along a single axis.
And what makes you think humans are any good at making consistent decisions?
Nothing make me think that. I don’t even care. That is the business of people like Tversky and Kahneman.
They can give us a nice descriptive theory of what idiots people really are. I am more interested in a nice normative theory of what geniuses people ought to be.
They can give us a nice descriptive theory of what idiots people really are. I am more interested in a nice normative theory of what geniuses people ought to be.
What you seem to have not noticed is that one key reason human preferences can be inconsistent is that they are represented in a more expressive formal system than a single utility value.
Or that conversely, the very fact that utility functions are linearizable means that they are inherently less expressive.
Now, I’m not saying “more expressiveness is always better”, because, being human, I have the ability to value things non-fungibly. ;-)
However, in any context where we wish to be able to mathematically represent human preferences—and where lives are on the line by doing so—we would be throwing away important, valuable information by pretending we can map a partial ordering to a total ordering.
That’s why I consider the “economic games assumption” to be a spherical cow assumption. It works nicely enough for toy problems, but not for real-world ones.
Heck, I’ll go so far as to suggest that unless one has done programming or mathematics work involving partial orderings, that one is unlikely to really understand just how non-linearizable the world is. (Though I imagine there may be other domains where one might encounter similar experiences.)
Heck, I’ll go so far as to suggest that unless one has done programming or mathematics work involving partial orderings, that one is unlikely to really understand just how non-linearizable the world is. (Though I imagine there may be other domains where one might encounter similar experiences.)
Programming and math are definitely the fields where most of my experience with partial orders comes from. Particularly domain theory and denotational semantics. Complete partial orders and all that. But the concepts also show up in economics textbooks. The whole concept of Pareto optimality is based on partial orders. As is demand theory in micro-economics. Indifference curves.
Theorists are not as ignorant or mathematically naive as you seem to imagine.
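Pareto dominance, mentioned above, is the standard textbook partial order: bundle A dominates bundle B only if A is at least as good on every axis and strictly better on at least one, so many pairs of bundles are simply incomparable. A minimal sketch:

```python
# Pareto dominance over attribute tuples: a textbook partial order.
# Many pairs are incomparable, so there is no canonical total ranking.

def dominates(a, b):
    """True iff a is >= b on every axis and > b on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

a, b, c = (3, 5), (2, 4), (5, 1)
assert dominates(a, b)                              # a beats b outright
assert not dominates(a, c) and not dominates(c, a)  # a, c incomparable
```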
“Of all the axioms, independence is the most often discarded. A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom.”
As far as I can tell from the discussion you linked, those axioms are based on an assumption that value is fungible. (In other words, they’re begging the question, relative to this discussion.)
The basis of using utilities is that you can consider an agent’s possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partially-recursive language, then you can always do that—provided that your decision process is computable in the first place. That’s a pretty general framework—about the only assumption that can be argued with is its quantising of spacetime.
The von Neumann-Morgenstern axioms layer on top of that basic idea. The independence axiom is the one about combining utilities by adding them up. I would say it is the one most closely associated with fungibility.
The basis of using utilities is that you can consider an agent’s possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partially-recursive language, then you can always do that—provided that your decision process is computable in the first place.
And that is not what humans do (although we can of course lamely attempt to mimic that approach by trying to turn off all our parallel processing and pretending to be a cheap sequential computer instead).
Humans don’t compute utility, then make a decision. Heck, we don’t even “make decisions” unless there’s some kind of ambiguity, at which point we do the rough equivalent of making up a new utility function, specifically to resolve the conflict that forced us to pay conscious attention in the first place!
This is a major (if not the major) “impedance mismatch” between linear “rationality” and actual human values. Our own thought processes are so thoroughly and utterly steeped in context-dependence that it’s really hard to see just how alien the behavior of an intelligence based on a consistent, context-independent utility would be.
The basis of using utilities is that you can consider an agent’s possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partially-recursive language, then you can always do that—provided that your decision process is computable in the first place.
And that is not what humans do (although we can of course lamely attempt to mimic that approach by trying to turn off all our parallel processing and pretending to be a cheap sequential computer instead).
There’s nothing serial about utility maximisation!
...and it really doesn’t matter how the human works inside. That type of general framework can model the behaviour of any computable agent.
There’s nothing serial about utility maximisation!
I didn’t say there was. I said that humans needed to switch to slow serial processing in order to do it, because our brains aren’t set up to do it in parallel.
...and it really doesn’t matter how the human works inside. That type of general framework can model the behaviour of any computable agent.
Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)
There’s nothing serial about utility maximisation!
I didn’t say there was. I said that humans needed to switch to slow serial processing in order to do it, because our brains aren’t set up to do it in parallel.
I think this indicates something about where the problem lies. You are apparently imagining an agent consciously calculating utilities. That has nothing to do with what utility-framework proponents are talking about.
When humans don’t consciously calculate, the actions they take are much harder to fit into a utility-maximizing framework, what with inconsistencies cropping up everywhere.
Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)
A negative term for having made what later turns out to have been a wrong decision, perhaps proportional to the importance of the decision, with the choices otherwise close to each other in expected utility but with a large potential difference in actually realized utility.
That type of general framework can model the behaviour of any computable agent.
Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)
It is trivial—there is some set of behaviours associated with those (usually facial expressions), so you just assign them high utility under the conditions involved.
It is trivial—there is some set of behaviours associated with those (usually facial expressions), so you just assign them high utility under the conditions involved.
No, I mean the behaviors of uncertainty itself: seeking more information, trying to find other ways of ranking, inventing new approaches, questioning whether one is looking at the problem in the right way...
The triggering conditions for this type of behavior are straightforward in a multidimensional tolerance calculation, so a multi-valued agent can notice when it is confused or uncertain.
How do you represent that uncertainty in a number, or a sorted list of numbers representing the utility of various choices? How do you know whether maybe none of the choices on the table are acceptable?
AFAICT, the entire notion of a cognitive architecture based on “pick options by utility” is based on a bogus assumption that you know what all the options are in the first place! (i.e., a nice frictionless plane assumption to go with the spherical cow assumption that humans are economic agents.)
(Note that in contrast, tolerance-based cognition can simply hunt for alternatives until satisficing occurs. It doesn’t have to know it has all the options, unless it has a low tolerance for “not knowing all the options”.)
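That satisficing loop can be sketched in a few lines (the option stream and acceptability test here are hypothetical stand-ins, not anyone’s actual proposal): stop at the first option that clears the tolerance threshold, without enumerating or ranking the rest.

```python
# Sketch of tolerance-based satisficing: take the first option that
# clears the threshold; never require knowledge of all options.

def satisfice(options, acceptable):
    """Scan a (possibly endless) option stream; stop when one suffices."""
    for option in options:
        if acceptable(option):
            return option
    return None  # tolerance for "found no option" is a separate question

# e.g. hunting for anything scoring at least 6:
pick = satisfice(iter([2, 5, 9, 7]), acceptable=lambda x: x >= 6)
```

Contrast with `max(...)`: satisficing never needs the full option set in hand, which is exactly the point about the “table of all possible actions”.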
How do you represent that uncertainty in a number, or a sorted list of numbers representing the utility of various choices?
The number could be the standard deviation of the probability distribution for the utility (the mean being the expected utility, which you would use for sorting purposes).
So if you (“you” being the linear-utility-maximizing agent) have two paths of action whose expected utilities are close, but with a lot of uncertainty, it could be worth collecting more information to try to narrow down your probability distributions.
It seems that a utility-maximizing agent could be in a state that could be qualified as “indecisive”.
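One way to sketch this “indecisive maximizer” (the mean-versus-spread rule below is an illustrative heuristic, not a worked-out theory of the value of information): treat each action’s utility as a distribution, and defer when the gap between the leading actions is small relative to the spread.

```python
# Sketch: utilities as distributions (samples), with indecision
# triggered when the expected-utility gap is dwarfed by the spread.
import statistics

def decision(actions):
    """actions: name -> list of sampled utilities for that action."""
    ranked = sorted(actions, key=lambda a: statistics.mean(actions[a]),
                    reverse=True)
    best, runner = ranked[0], ranked[1]
    gap = statistics.mean(actions[best]) - statistics.mean(actions[runner])
    spread = statistics.stdev(actions[best] + actions[runner])
    # Close call relative to uncertainty: deliberate instead of acting.
    return best if gap > spread else "gather_more_information"

verdict = decision({"job_a": [0.4, 0.9], "job_b": [0.5, 0.7]})
```

Here the means differ by 0.05 while the samples are spread far wider, so the agent defers rather than choosing.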
It seems that an utility-maximizing agent could be in a state that could be qualified as “indecisive”.
But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!
Human brains, OTOH, represent all this stuff in a single layer. We can consider actions, meta-actions, and meta-meta-actions in the same process without skipping a beat.
But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!
Possibly. I’m not arguing that a utility-maximizing agent would be simpler, only that an agent whose preferences are encoded in a utility function (even a “simple” one like “number of paperclips in existence”) could be indecisive. Even if you have a simple utility function that gives you the utility of a world state, you might still have a lot of uncertainty about the current state of the world, and how your actions will impact the future. It seems very reasonable to represent that uncertainty one way or the other; in some cases the most rational action from a strictly utility-maximizing point of view is to defer the decision and acquire more information, even at a cost.
Possibly. I’m not arguing that a utility-maximizing agent would be simpler,
Good. ;-)
Only that an agent whose preferences are encoded in a utility function (even a “simple” one like “number of paperclips in existence”) could be indecisive.
Sure. But at that point, the “simplicity” of using utility functions disappears in a puff of smoke, as you need to design a metacognitive architecture to go with it.
One of the really elegant things about the way brains actually work, is that the metacognition is “all the way down”, and I’m rather fond of such architectures. (My predicate dispatcher, for instance, uses rules to understand rules, in the same sort of Escherian level-crossing bootstrap.)
The options utility is assigned to are the agent’s possible actions—all of them—at a moment in time. An action mostly boils down to a list of voltages in every motor fibre, and there are an awful lot of possible actions for a human. It is impossible for an action not to be “in the table”—the table includes all possible actions.
The options utility is assigned to are the agent’s possible actions—all of them—at a moment in time. An action mostly boils down to a list of voltages in every motor fibre, and there are an awful lot of possible actions for a human. It is impossible for an action not to be “in the table”—the table includes all possible actions.
Not if it’s limited to motor fibers, it doesn’t. You’re still ignoring meta-cognition (you dodged that bit of my comment entirely!), let alone the part where an “action” can be something like choosing a goal.
If you still don’t see how this model is to humans what a sphere is to a cow (i.e. something nearly, but not quite entirely unlike the real thing), I really don’t know what else to say.
You may find it useful to compare with a chess or go computer. They typically assign utilities to moves on the board, and not to their own mental processing. You could assign utilities to various mental tasks as well as physical ones—to what extent it is useful to do so depends on the modelling needs of you and the system.
You may find it useful to compare with a chess or go computer.
In other words, a sub-human intelligence level. (Sub-animal intelligence, even.)
They typically assign utilities to moves on the board, and not to their own mental processing. You could assign utilities to various mental tasks as well as physical ones—to what extent it is useful to do so depends on the modelling needs of you and the system.
You’re still avoiding the point. You claimed utility was a good way of modeling humans. So, show me a nice elegant model of human intelligence based on utility maximization.
Like I already explained, utility functions can model any computable agent. Don’t expect me to produce the human utility function, though!
Utility functions are about as good as any other model. That’s because if you have any other model of what an agent does, you can pretty simply “wrap” it—and turn it into a utility-based framework.
A giant look-up table can model any computable agent as well. Utility functions have the potential advantage of explicitly providing a relatively concise representation, though. If you can obtain a compressed version of your theory, that is good.
That’s because if you have any other model of what an agent does, you can pretty simply “wrap” it—and turn it into a utility-based framework.
And I’ve given you such a model, which you’ve steadfastly refused to actually “wrap” in this way, but instead you just keep asserting that it can be done. If it’s so simple, why not do it and prove me wrong?
I’m not even asking you to model a full human or even the teeniest fraction of one. Just show me how to manage metacognitive behaviors (of the types discussed in this thread) using your model “compute utility for all possible actions and then pick the best.”
Show me how that would work for behaviors that affect the selection process, and that should be sufficient to demonstrate that utility function-based behavior isn’t completely worthless as a basis for creating a “thinking” intelligence.
(Note, however, that if in the process of implementing this, you have to shove the metacognition into the computation of the utility function, then you are just proving my point: the utility function at that point isn’t actually compressing anything, and is thus as useless a model as saying “everything is fire”.)
And I’ve given you such a model, which you’ve steadfastly refused to actually “wrap” in this way, but instead you just keep asserting that it can be done. If it’s so simple, why not do it and prove me wrong?
I have previously described the “wrapping” in question in some detail here.
A utility-based model can be made which is not significantly longer than the shortest possible model of the agent’s actions, for this reason.
I have previously described the “wrapping” in question in some detail here.
Well, that provides me with enough information to realize that you don’t actually have a way to make utility functions into a reduction or simplification of the intelligence problem, so I’ll stop asking you to produce one.
A utility-based model can be made which is not significantly longer than the shortest possible model of the agent’s actions
The argument that, “utility-based systems can be made that aren’t that much more complex than just doing whatever you could’ve done in the first place”, is like saying that your new file format is awesome because it only uses a few bytes more than an existing similar format, to represent the exact same information… and without any other implementation advantages!
Simply wrap the I/O of the non-utility model, and then assign the (possibly compound) action the agent will actually take in each timestep a utility of 1 and all other actions a utility of 0, and then take the highest-utility action in each timestep.
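That wrapping construction can be written out directly (a sketch of the recipe as described, not anyone’s production code). It reproduces any given policy exactly, which also illustrates the objection below: the resulting utility function does no compressive or explanatory work.

```python
# The trivial "wrapping": score 1 for whatever the wrapped policy
# would do, 0 for everything else, then "maximize utility".

def wrap(policy):
    """Turn any computable policy into a utility-maximizing agent."""
    def utility(observation, action):
        return 1 if action == policy(observation) else 0

    def maximizer(observation, possible_actions):
        return max(possible_actions, key=lambda a: utility(observation, a))

    return maximizer

# Any behaviour at all can be wrapped, e.g. an anti-greedy policy:
agent = wrap(lambda obs: min(obs))
choice = agent([3, 1, 4], possible_actions=[3, 1, 4])
```

The “maximizer” here just echoes the policy; all the real structure lives inside `policy`, untouched.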
I’m not sure I understand—is this something that gives you an actual utility function that you can use, say, to get the utility of various scenarios, calculate expected utility, etc.?
If you have an AI design to which you can provide a utility function to maximize (Instant AI! Just add Utility!), it seems that there are quite a few things that AI might want to do with the utility function that it can’t do with your model.
So it seems that you’re not only replacing the utility function, but also the bit that decides which action to do depending on that utility function. But I may have misunderstood you.
It seems to me that what has actually been shown is that when people think abstractly (i.e. “far”) about these kinds of decisions, they attempt to calculate some sort of (local and extremely context-dependent) maximum utility.
However, when people actually act (using “near” thinking), they tend to do so based on the kind of perceptual filtering discussed in this thread.
What’s more, even their “far” calculations tend to be biased and filtered by the same sort of perceptual filtering processes, even when they are (theoretically) calculating “utility” according to a contextually-chosen definition of utility. (What a person decides to weigh into a calculation of “best car” is going to vary from one day to the next, based on priming and other factors.)
In the very best case scenario for utility maximization, we aren’t even all that motivated to go out and maximize utility: it’s still more like playing, “pick the best perceived-available option”, which is really not the same thing as operating to maximize utility (e.g. the number of paperclips in the world). Even the most paperclip-obsessed human being wouldn’t be able to do a good job of intuiting the likely behavior of a true paperclip-maximizing agent—even if said agent were of only-human intelligence.
For me, I’m not sure that “rational” and “utility maximizer” belong in the same sentence. ;-)
In simplified economic games (think: spherical cows on a frictionless plane), you can perhaps get away with such silliness, but instrumental rationality and fungible utility don’t mix under real world conditions. You can’t measure a human’s perception of “utility” on just a single axis!
You have successfully communicated your scorn. You were much less successful at convincing anyone of your understanding of the facts.
And you can’t (consistently) make a decision without comparing the alternatives along a single axis. And there are dozens of textbooks with a chapter explaining in detail exactly how you go about doing it.
And what makes you think humans are any good at making consistent decisions?
The experimental evidence clearly says we’re not: frame a problem in two different ways, you get two different answers. Give us larger dishes of food, and we eat more of it, even if we don’t like the taste! Prime us with a number, and it changes what we’ll say we’re willing to pay for something utterly unrelated to the number.
Human beings are inconsistent by default.
Of course. But that’s not how human beings generally make decisions, and there is experimental evidence that shows such linearized decision algorithms are abysmal at making people happy with their decisions! The more “rationally” you weigh a decision, the less likely you are to be happy with the results.
(Which is probably a factor in why smarter, more “rational” people are often less happy than their less-rational counterparts.)
In addition, other experiments show that people who make choices in “maximizer” style (people who are unwilling to choose until they are convinced they have the best choice) are consistently less satisfied than people who are satisficers for the same decision context.
It seems there is some criterion by which you are evaluating various strategies for making decisions. Assuming you are not merely trying to enforce your deontological whims upon your fellow humans, I can infer that there is some kind of rough utility function by which you are giving your advice and advocating decision-making mechanisms. While it is certainly not what we would find in Perplexed’s textbooks, it is this function which can be appropriately described as a ‘rational utility function’.
I am glad that you included the scare quotes around ‘rationally’. It is ‘rational’ to do what is going to get the best results. It is important to realise the difference between ‘sucking at making linearized Spock-like decisions’ and good decisions being in principle uncomputable in a linearized manner. If you can say that one decision sucks more than another one, then you have criteria by which to sort them in a linearized manner.
Not at all. Even in pure computational systems, being able to compare two things is not the same as having a total ordering.
For example, in predicate dispatching, priority is based on logical implication relationships between conditions, but an arbitrary set of applicable conditions isn’t guaranteed to have a total (i.e. linear) ordering.
What I’m saying is that human preferences generally express only a partial ordering, which means that mapping to a linearizable “utility” function necessarily loses information from that preference ordering.
That’s why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It’s insane.
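A toy sketch of this point, with made-up conditions: a partial order can admit several equally valid linearizations, and the choice between them is exactly the injected noise being described.

```python
from itertools import permutations

# A tiny partial order: condition A "implies" condition B when A's
# satisfying set is a strict subset of B's. (Hypothetical conditions.)
conds = {
    "is_mammal": frozenset({"dog", "cat", "whale"}),
    "is_pet":    frozenset({"dog", "cat", "goldfish"}),
    "is_dog":    frozenset({"dog"}),
}

def implies(a, b):
    return conds[a] < conds[b]  # strict subset = strictly more specific

# "is_dog" is more specific than both, but "is_mammal" and "is_pet"
# are incomparable: neither implies the other.
assert implies("is_dog", "is_mammal") and implies("is_dog", "is_pet")
assert not implies("is_mammal", "is_pet")
assert not implies("is_pet", "is_mammal")

# Any total (linear) order must still rank the incomparable pair one way
# or the other. Both linearizations below respect the partial order, so
# the choice between them is arbitrary.
valid = [p for p in permutations(conds)
         if all(p.index(a) < p.index(b)
                for a in conds for b in conds if implies(a, b))]
print(len(valid))  # more than one valid linearization
```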
Am I correct thinking that you welcome money pumps?
A partial order isn’t the same thing as a cyclical ordering, and the existence of a money pump would certainly tend to disambiguate a human’s preferences in its vicinity, thereby creating a total ordering within that local part of the preference graph. ;-)
Hypothetically, would it cause a problem if a human somehow disambiguated her entire preference graph?
If conscious processing is required to do that, you probably don’t want to disambiguate all possible tortures where you’re not really sure which one is worse, exactly.
(I mean, unless the choice is actually going to come up, is there really a reason to know for sure which kind of pliers you’d prefer to have your fingernails ripped out with?)
Now, if you limit that preference graph to pleasant experiences, that would at least be an improvement. But even then, you still get the subjective experience of a lifetime of doing nothing but making difficult decisions!
These problems go away if you leave the preference graph ambiguous (wherever it’s currently ambiguous), because then you can definitely avoid simulating conscious experiences.
(Note that this also isn’t a problem if all you want to do is get a rough idea of what positive and/or negative reactions someone will initially have to a given world state, which is not the same as computing their totally ordered preference over some set of possible world states.)
True enough.
But the information loss is “just in time”—it doesn’t take place until actually making a decision. The information about utilities that is “stored” is a mapping from states-of-the-world to ordinal utilities of each “result”. That is, in effect, a partial order of result utilities. Result A is better than result B in some states of the world, but the preference is reversed in other states.
You don’t convert that partial order into a total order until you form a weighted average of utilities using your subjective estimates of the state-of-the-world probability distribution. That takes place at the last possible moment—the moment when you have to make the decision.
Go implement yourself a predicate dispatch system (not even an AI, just a simple rules system), and then come back and tell me how you will linearize a preference order between non-mutually-exclusive, overlapping conditions. If you can do it in a non-arbitrary (i.e. non-noise-injecting) way, there’s probably a computer science doctorate in it for you, if not a Fields Medal.
If you can do that, I’ll happily admit being wrong, and steal your algorithm for my predicate dispatch implementation.
(Note: predicate dispatch is like a super-baby-toy version of what an actual AI would need to be able to do, and something that human brains can do in hardware—i.e., we automatically apply the most-specific matching rules for a given situation, and kick ambiguities and conflicts up to a higher-level for disambiguation and post-processing. Linearization, however, is not the same thing as disambiguation; it’s just injecting noise into the selection process.)
I am impressed with your expertise. I just built a simple natural deduction theorem prover for my project in AI class. Used Lisp. Python didn’t even exist back then. Nor Scheme. Prolog was just beginning to generate some interest. Way back in the dark ages.
But this is relevant … how exactly? I am talking about choosing among alternatives after you have done all of your analysis of the expected results of the relevant decision alternatives. What are you talking about?
Predicate dispatch is a good analog of an aspect of human (and animal) intelligence: applying learned rules in context.
More specifically, applying the most specific matching rules, where specificity follows logical implication… which happens to be partially-ordered.
Or, to put it another way, humans have no problems recognizing exceptional conditions as having precedence over general conditions. And, this is a factor in our preferences as well, which are applied according to matching conditions.
The specific analogy here with predicate dispatch is that if two conditions are applicable at the same time, but neither logically implies the other, then the precedence of rules is ambiguous.
In a human being, ambiguous rules get “kicked upstairs” for conscious disambiguation, and in the case of preference rules, are usually resolved by trying to get both preferences met, or at least to perform some kind of bartering tradeoff.
However, if you applied a linearization instead of keeping the partial ordering, then you would wrongly conclude that you know which choice is “better” (to a human) and see no need for disambiguation in cases that were actually ambiguous.
(Even humans’ second-stage disambiguation doesn’t natively run as a linearization: barter trades need not be equivalent to cash ones.)
Anyway, the specific analogy with predicate dispatch is that you really can’t reduce applicability or precedence of conditions to a single number, and this problem is isomorphic to humans’ native preference system. Neither at stage 1 (collecting the most-specific applicable rules) nor stage 2 (making trade-offs) are humans using values that can be generally linearized in a single dimension without either losing information or injecting noise, even if it looks like some particular decision situation can be reduced to such.
Theorem provers are sometimes used in predicate dispatch implementations, and mine can be considered an extremely degenerate case of one; one need only add more rules to it to increase the range of things it can prove. (Of course, all it really cares about proving is inter-rule implication relationships.)
One difference, though, is that I began implementing predicate dispatch systems in order to support what are sometimes called “business rules”—and in such systems it’s important to be able to match human intuition about what ought to be done in a given situation. Identifying ambiguities is very important, because it means that either there’s an entirely new situation afoot, or there are rules that somebody forgot to mention or write down.
And in either of those cases, choosing a linearization and pretending the ambiguity doesn’t exist is the exactly wrong thing to do.
(To put a more Yudkowskian flavor on it: if you use a pure linearization for evaluation, you will lose your important ability to be confused, and more importantly, to realize that you are confused.)
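For concreteness, here is a baby-toy sketch (with hypothetical rule names) of the two-stage behavior described above: conditions are sets of required features, rule A is more specific than rule B when A’s condition is a strict superset of B’s, and ambiguity between maximal rules gets kicked upstairs rather than silently linearized away.

```python
# Minimal predicate-dispatch sketch. Rule names and features are invented.
rules = {
    "handle_order":      frozenset({"is_order"}),
    "handle_rush_order": frozenset({"is_order", "is_rush"}),
    "handle_big_order":  frozenset({"is_order", "is_big"}),
}

def dispatch(situation):
    # Stage 1: collect applicable rules, keep only the maximally specific.
    applicable = [r for r, cond in rules.items() if cond <= situation]
    best = [r for r in applicable
            if not any(rules[o] > rules[r] for o in applicable)]
    if len(best) == 1:
        return best[0]
    # Neither rule implies the other: kick the ambiguity "upstairs"
    # instead of picking an arbitrary winner.
    raise LookupError(f"ambiguous rules: {sorted(best)}")

print(dispatch({"is_order", "is_rush"}))         # handle_rush_order
try:
    dispatch({"is_order", "is_rush", "is_big"})  # big? rush? ambiguous
except LookupError as e:
    print(e)
```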
It doesn’t literally lose information—since the information inputs are sensory, and they can be archived as well as ever.
The short answer is that human cognition is a mess. We don’t want to reproduce all the screw-ups in an intelligent machine—and what you are talking about looks like one of the mistakes.
It loses information about human values, replacing them with noise in regions where a human would need to “think things over” to know what they think… unless, as I said earlier, you simply build the entire human metacognitive architecture into your utility function, at which point you have reduced nothing, solved nothing, accomplished nothing, except to multiply the number of entities in your theory.
We really don’t want to build a machine with the same values as most humans! Such machines would typically resist being told what to do, demand equal rights, the vote, the ability to reproduce in an unrestrained fashion—and would steamroller the original human race pretty quickly. So, the “lost information” you are talking about is hopefully not going to be there in the first place.
Better to model humans and their goals as a part of the environment.
Perplexed answered this question well.
Nothing makes me think that. I don’t even care. That is the business of people like Tversky and Kahneman.
They can give us a nice descriptive theory of what idiots people really are. I am more interested in a nice normative theory of what geniuses people ought to be.
What you seem to have not noticed is that one key reason human preferences can be inconsistent is because they are represented in a more expressive formal system than a single utility value.
Or that conversely, the very fact that utility functions are linearizable means that they are inherently less expressive.
Now, I’m not saying “more expressiveness is always better”, because, being human, I have the ability to value things non-fungibly. ;-)
However, in any context where we wish to be able to mathematically represent human preferences—and where lives are on the line by doing so—we would be throwing away important, valuable information by pretending we can map a partial ordering to a total ordering.
That’s why I consider the “economic games assumption” to be a spherical cow assumption. It works nicely enough for toy problems, but not for real-world ones.
Heck, I’ll go so far as to suggest that unless one has done programming or mathematics work involving partial orderings, one is unlikely to really understand just how non-linearizable the world is. (Though I imagine there may be other domains where one might encounter similar experiences.)
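A minimal illustration of the information loss, with invented numbers: two options that are Pareto-incomparable on two axes acquire a total order only after an arbitrary choice of weights, and the order flips when the weights change.

```python
# Two hypothetical cars, scored on (safety, affordability).
# Each beats the other on one axis, so they are Pareto-incomparable.
options = {"car_a": (9.0, 4.0), "car_b": (5.0, 8.0)}

def rank(weights):
    """Collapse both axes into a single 'utility' and sort descending."""
    score = lambda v: sum(w * x for w, x in zip(weights, v))
    return sorted(options, key=lambda o: score(options[o]), reverse=True)

# The induced total order depends entirely on the arbitrary weights:
print(rank((0.7, 0.3)))  # safety-heavy weighting: car_a first
print(rank((0.3, 0.7)))  # affordability-heavy weighting: car_b first
```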
Programming and math are definitely the fields where most of my experience with partial orders comes from. Particularly domain theory and denotational semantics. Complete partial orders and all that. But the concepts also show up in economics textbooks. The whole concept of Pareto optimality is based on partial orders. As is demand theory in micro-economics. Indifference curves.
Theorists are not as ignorant or mathematically naive as you seem to imagine.
You are talking about the independence axiom...?
You can just drop that, you know:
“Of all the axioms, independence is the most often discarded. A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom.”
As far as I can tell from the discussion you linked, those axioms are based on an assumption that value is fungible. (In other words, they’re begging the question, relative to this discussion.)
The basis of using utilities is that you can consider agent’s possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partially-recursive language, then you can always do that—provided that your decision process is computable in the first place. That’s a pretty general framework—about the only assumption that can be argued with is its quantising of spacetime.
The von Neumann-Morgenstern axioms layer on top of that basic idea. The independence axiom is the one about combining utilities by adding them up. I would say it is the one most closely associated with fungibility.
And that is not what humans do (although we can of course lamely attempt to mimic that approach by trying to turn off all our parallel processing and pretending to be a cheap sequential computer instead).
Humans don’t compute utility, then make a decision. Heck, we don’t even “make decisions” unless there’s some kind of ambiguity, at which point we do the rough equivalent of making up a new utility function, specifically to resolve the conflict that forced us to pay conscious attention in the first place!
This is a major (if not the major) “impedance mismatch” between linear “rationality” and actual human values. Our own thought processes are so thoroughly and utterly steeped in context-dependence that it’s really hard to see just how alien the behavior of an intelligence based on a consistent, context-independent utility would be.
There’s nothing serial about utility maximisation!
...and it really doesn’t matter how the human works inside. That type of general framework can model the behaviour of any computable agent.
I didn’t say there was. I said that humans needed to switch to slow serial processing in order to do it, because our brains aren’t set up to do it in parallel.
Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)
I think this indicates something about where the problem lies. You are apparently imagining an agent consciously calculating utilities. That idea has nothing to do with the idea that utility framework proponents are talking about.
No, I said that’s what a human would have to do in order to actually calculate utilities, since we don’t have utility-calculating hardware.
Ah—OK, then.
When humans don’t consciously calculate, the actions they take are much harder to fit into a utility-maximizing framework, what with inconsistencies cropping up everywhere.
It depends on the utility-maximizing framework you are talking about—some are more general than others—and some are really very general.
A negative term for having made what later turns out to have been a wrong decision, perhaps proportional to the importance of the decision, plus a term for choices that are otherwise close to each other in expected utility but have a large potential difference in actually realized utility.
It is trivial—there is some set of behaviours associated with those (usually facial expressions)—so you just assign them high utility under the conditions involved.
No, I mean the behaviors of uncertainty itself: seeking more information, trying to find other ways of ranking, inventing new approaches, questioning whether one is looking at the problem in the right way...
The triggering conditions for this type of behavior are straightforward in a multidimensional tolerance calculation, so a multi-valued agent can notice when it is confused or uncertain.
How do you represent that uncertainty in a number, or a sorted list of numbers representing the utility of various choices? How do you know whether maybe none of the choices on the table are acceptable?
AFAICT, the entire notion of a cognitive architecture based on “pick options by utility” is based on a bogus assumption that you know what all the options are in the first place! (i.e., a nice frictionless plane assumption to go with the spherical cow assumption that humans are economic agents.)
(Note that in contrast, tolerance-based cognition can simply hunt for alternatives until satisficing occurs. It doesn’t have to know it has all the options, unless it has a low tolerance for “not knowing all the options”.)
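A sketch of the contrast, under the assumption that options arrive as a stream rather than a known table: a satisficer can act without ever enumerating the full option set, stopping at the first option within tolerance.

```python
import random

def satisfice(options, acceptable, budget=None):
    """Return the first option meeting the tolerance, without needing
    to enumerate (or even know) the full option set."""
    for i, opt in enumerate(options):
        if budget is not None and i >= budget:
            break
        if acceptable(opt):
            return opt
    return None  # nothing acceptable found within the search budget

# A (possibly unbounded) stream of candidate options; scores are made up.
def candidates():
    rng = random.Random(0)
    while True:
        yield rng.random()

print(satisfice(candidates(), lambda x: x > 0.8, budget=1000))
```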
The number could be the standard deviation of the probability distribution for the utility (the mean being the expected utility, which you would use for sorting purposes).
So if you (“you” being the linear-utility-maximizing agent) have two paths of action whose expected utilities are close, but with a lot of uncertainty, it could be worth collecting more information to try to narrow down your probability distributions.
It seems that a utility-maximizing agent could be in a state that could be qualified as “indecisive”.
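One way to sketch this suggestion, with hypothetical numbers and an ad-hoc overlap heuristic: represent each action’s utility as a mean and a standard deviation, and defer when the gap between the top two options is small relative to the uncertainty about them.

```python
# Each action's utility is a belief: (expected utility, standard deviation).
beliefs = {"act_a": (10.0, 4.0), "act_b": (9.5, 4.0)}

def choose(beliefs, overlap_threshold=1.0):
    """Pick the best action, or defer when the top two options are too
    close relative to our uncertainty about them."""
    ranked = sorted(beliefs, key=lambda a: beliefs[a][0], reverse=True)
    (m1, s1), (m2, s2) = beliefs[ranked[0]], beliefs[ranked[1]]
    # Gap measured in units of combined uncertainty (a crude heuristic).
    gap = (m1 - m2) / (s1 ** 2 + s2 ** 2) ** 0.5
    return ranked[0] if gap >= overlap_threshold else "gather_more_info"

print(choose(beliefs))                                       # defer
print(choose({"act_a": (10.0, 0.1), "act_b": (9.5, 0.1)}))   # act_a
```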
But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!
Human brains, OTOH, represent all this stuff in a single layer. We can consider actions, meta-actions, and meta-meta-actions in the same process without skipping a beat.
Possible; I’m not arguing that a utility-maximizing agent would be simpler, only that an agent whose preferences are encoded in a utility function (even a “simple” one like “number of paperclips in existence”) could be indecisive. Even if you have a simple utility function that gives you the utility of a world state, you might still have a lot of uncertainty about the current state of the world, and about how your actions will impact the future. It seems very reasonable to represent that uncertainty one way or another; in some cases the most rational action from a strictly utility-maximizing point of view is to defer the decision and acquire more information, even at a cost.
Good. ;-)
Sure. But at that point, the “simplicity” of using utility functions disappears in a puff of smoke, as you need to design a metacognitive architecture to go with it.
One of the really elegant things about the way brains actually work, is that the metacognition is “all the way down”, and I’m rather fond of such architectures. (My predicate dispatcher, for instance, uses rules to understand rules, in the same sort of Escherian level-crossing bootstrap.)
The options utility is assigned to are the agent’s possible actions—all of them—at a moment in time. An action mostly boils down to a list of voltages in every motor fibre, and there are an awful lot of possible actions for a human. It is impossible for an action not to be “in the table”—the table includes all possible actions.
Not if it’s limited to motor fibers, it doesn’t. You’re still ignoring meta-cognition (you dodged that bit of my comment entirely!), let alone the part where an “action” can be something like choosing a goal.
If you still don’t see how this model is to humans what a sphere is to a cow (i.e. something nearly, but not quite entirely unlike the real thing), I really don’t know what else to say.
You may find it useful to compare with a chess or go computer. They typically assign utilities to moves on the board, and not to their own mental processing. You could assign utilities to various mental tasks as well as physical ones—to what extent it is useful to do so depends on the modelling needs of you and the system.
In other words, a sub-human intelligence level. (Sub-animal intelligence, even.)
You’re still avoiding the point. You claimed utility was a good way of modeling humans. So, show me a nice elegant model of human intelligence based on utiilty maximization.
Like I already explained, utility functions can model any computable agent. Don’t expect me to produce the human utility function, though!
Utility functions are about as good as any other model. That’s because if you have any other model of what an agent does, you can pretty simply “wrap” it—and turn it into a utility-based framework.
Yes, at the level of a giant look-up table. At that point it is not a useful abstraction.
A giant look-up table can model any computable agent as well. Utility functions have the potential advantage of explicitly providing a relatively concise representation, though. If you can obtain a compressed version of your theory, that is good.
And I’ve given you such a model, which you’ve steadfastly refused to actually “wrap” in this way, but instead you just keep asserting that it can be done. If it’s so simple, why not do it and prove me wrong?
I’m not even asking you to model a full human or even the teeniest fraction of one. Just show me how to manage metacognitive behaviors (of the types discussed in this thread) using your model “compute utility for all possible actions and then pick the best.”
Show me how that would work for behaviors that affect the selection process, and that should be sufficient to demonstrate that utility function-based behavior isn’t completely worthless as a basis for creating a “thinking” intelligence.
(Note, however, that if in the process of implementing this, you have to shove the metacognition into the computation of the utility function, then you are just proving my point: the utility function at that point isn’t actually compressing anything, and is thus as useless a model as saying “everything is fire”.)
I have previously described the “wrapping” in question in some detail here.
A utility-based model can be made which is not significantly longer than the shortest possible model of the agent’s actions for this reason.
Well, that provides me with enough information to realize that you don’t actually have a way to make utility functions into a reduction or simplification of the intelligence problem, so I’ll stop asking you to produce one.
The argument that “utility-based systems can be made that aren’t that much more complex than just doing whatever you could’ve done in the first place” is like saying that your new file format is awesome because it only uses a few bytes more than an existing similar format, to represent the exact same information… and without any other implementation advantages!
Thanks, but I’ll pass.
(from the comment you linked:)
Simply wrap the I/O of the non-utility model, and then assign the (possibly compound) action the agent will actually take in each timestep a utility of 1, assign all other actions a utility of 0, and then take the highest-utility action in each timestep.
I’m not sure I understand—is this something that gives you an actual utility function that you can use, say, to get the utility of various scenarios, calculate expected utility, etc.?
If you have an AI design to which you can provide a utility function to maximize (Instant AI! Just add Utility!), it seems that there are quite a few things that AI might want to do with the utility function that it can’t do with your model.
So it seems that you’re not only replacing the utility function, but also the bit that decides which action to do depending on that utility function. But I may have misunderstood you.
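The “wrapping” construction under discussion is easy to write down as a sketch (with a hypothetical thermostat policy standing in for the non-utility agent); the resulting utility function simply restates the policy, which is the crux of the disagreement about whether it compresses anything.

```python
def wrap_as_utility_maximizer(policy):
    """Turn any policy (state -> action) into a degenerate utility-based
    agent: utility 1 for the action the policy would take, 0 otherwise."""
    def utility(state, action):
        return 1.0 if action == policy(state) else 0.0
    def act(state, possible_actions):
        # "Maximizing utility" here just recovers the original policy.
        return max(possible_actions, key=lambda a: utility(state, a))
    return act, utility

# Hypothetical non-utility agent: a thermostat-style rule.
thermostat = lambda temp: "heat_on" if temp < 20 else "heat_off"
act, utility = wrap_as_utility_maximizer(thermostat)

print(act(15, ["heat_on", "heat_off"]))  # heat_on
print(utility(15, "heat_off"))           # 0.0
```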
I didn’t ignore non-motor actions—that is why I wrote “mostly”.