In addition, other experiments show that people who make choices in “maximizer” style (people who are unwilling to choose until they are convinced they have the best option) are consistently less satisfied than satisficers facing the same decision.
It seems there are some criteria by which you are evaluating various strategies for making decisions. Assuming you are not merely trying to enforce your deontological whims upon your fellow humans, I can infer that there is some kind of rough utility function by which you are giving your advice and advocating decision-making mechanisms. While it is certainly not what we would find in Perplexed’s textbooks, it is this function which can appropriately be described as a ‘rational utility function’.
Of course. But that’s not how human beings generally make decisions, and there is experimental evidence that shows such linearized decision algorithms are abysmal at making people happy with their decisions! The more “rationally” you weigh a decision, the less likely you are to be happy with the results.
I am glad that you included the scare quotes around ‘rationally’. It is ‘rational’ to do what is going to get the best results. It is important to realise the difference between ‘sucking at making linearized, Spock-like decisions’ and good decisions being in principle uncomputable in a linearized manner. If you can say that one decision sucks more than another one, then you have criteria by which to sort them in a linearized manner.
If you can say that one decision sucks more than another one, then you have criteria by which to sort them in a linearized manner.
Not at all. Even in pure computational systems, being able to compare two things is not the same as having a total ordering.
For example, in predicate dispatching, priority is based on logical implication relationships between conditions, but an arbitrary set of applicable conditions isn’t guaranteed to have a total (i.e. linear) ordering.
What I’m saying is that human preferences generally express only a partial ordering, which means that mapping to a linearizable “utility” function necessarily loses information from that preference ordering.
That’s why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It’s insane.
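(For concreteness, here is a toy Python sketch, with invented outcomes and numbers, of what “comparable in some cases, but no total ordering” looks like, and of how any single-number utility forces an ordering on pairs the preference data never ranked.)

```python
# Toy illustration (not any real system): outcomes compared attribute-wise.
# One outcome "dominates" another only if it is at least as good on every
# attribute and strictly better on at least one -- a classic partial order.

def dominates(a, b):
    """True if outcome a is at least as good as b everywhere, better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# (comfort, income) -- made-up numbers purely for illustration
quiet_job  = (9, 40)
hectic_job = (3, 90)
awful_job  = (2, 35)

print(dominates(quiet_job, awful_job))   # True: better on both attributes
print(dominates(quiet_job, hectic_job))  # False
print(dominates(hectic_job, quiet_job))  # False -> the pair is incomparable

# Any single-number "utility" forces an answer anyway:
def fake_utility(outcome):
    comfort, income = outcome
    return comfort + income / 10.0       # arbitrary weights = injected noise

print(fake_utility(quiet_job) > fake_utility(hectic_job))  # True -- but this says
# nothing the preference data actually contained about an incomparable pair.
```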
Am I correct in thinking that you welcome money pumps?
A partial order isn’t the same thing as a cyclical ordering, and the existence of a money pump would certainly tend to disambiguate a human’s preferences in its vicinity, thereby creating a total ordering within that local part of the preference graph. ;-)
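(Sketch of the distinction, purely illustrative: a money pump needs a cycle of strict preferences, A over B over C over A, and a partial order can have gaps but, by definition, no such cycle. Checking for one is mechanical.)

```python
# Toy check (illustration only): a money pump requires a cycle of strict
# preferences. Incomparability is the *absence* of an edge, not a cycle.

def has_preference_cycle(strict_prefs):
    """strict_prefs: set of (better, worse) pairs. DFS for a directed cycle."""
    graph = {}
    for better, worse in strict_prefs:
        graph.setdefault(better, set()).add(worse)
    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)
        for nxt in graph.get(node, ()):
            if nxt in visiting or (nxt not in done and dfs(nxt)):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph if n not in done)

partial = {("A", "B"), ("A", "C")}              # B vs C left incomparable
cyclic  = {("A", "B"), ("B", "C"), ("C", "A")}  # the money-pumpable case

print(has_preference_cycle(partial))  # False -- no pump, despite the gaps
print(has_preference_cycle(cyclic))   # True  -- this is what gets exploited
```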
Hypothetically, would it cause a problem if a human somehow disambiguated her entire preference graph?
If conscious processing is required to do that, you probably don’t want to disambiguate all possible tortures where you’re not really sure which one is worse, exactly.
(I mean, unless the choice is actually going to come up, is there really a reason to know for sure which kind of pliers you’d prefer to have your fingernails ripped out with?)
Now, if you limit that preference graph to pleasant experiences, that would at least be an improvement. But even then, you still get the subjective experience of a lifetime of doing nothing but making difficult decisions!
These problems go away if you leave the preference graph ambiguous (wherever it’s currently ambiguous), because then you can definitely avoid simulating conscious experiences.
(Note that this also isn’t a problem if all you want to do is get a rough idea of what positive and/or negative reactions someone will initially have to a given world state, which is not the same as computing their totally ordered preference over some set of possible world states.)
What I’m saying is that human preferences generally express only a partial ordering, which means that mapping to a linearizable “utility” function necessarily loses information from that preference ordering.
True enough.
That’s why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It’s insane.
But the information loss is “just in time”—it doesn’t take place until actually making a decision. The information about utilities that is “stored” is a mapping from states-of-the-world to ordinal utilities of each “result”. That is, in effect, a partial order of result utilities. Result A is better than result B in some states of the world, but the preference is reversed in other states.
You don’t convert that partial order into a total order until you form a weighted average of utilities using your subjective estimates of the state-of-the-world probability distribution. That takes place at the last possible moment—the moment when you have to make the decision.
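(A schematic example with made-up numbers: what gets stored is state-contingent utilities per result, and the collapse to a single number per option happens only when you weight by your current subjective probabilities.)

```python
# Schematic only, with invented numbers: the "stored" preference information
# is state-contingent; the single expected utility per option is computed
# only at decision time.

stored_utilities = {            # option -> {state of the world: ordinal utility}
    "carry_umbrella": {"rain": 8, "sun": 4},
    "leave_umbrella": {"rain": 1, "sun": 9},
}
# Note the partial order: carrying beats leaving if it rains, loses if it's sunny.

def expected_utility(option, p_state):
    """Collapse to one number using current subjective probabilities."""
    return sum(p * stored_utilities[option][s] for s, p in p_state.items())

beliefs = {"rain": 0.3, "sun": 0.7}     # today's subjective estimate

decision = max(stored_utilities, key=lambda opt: expected_utility(opt, beliefs))
print(expected_utility("carry_umbrella", beliefs))  # 8*0.3 + 4*0.7 = 5.2
print(expected_utility("leave_umbrella", beliefs))  # 1*0.3 + 9*0.7 = 6.6
print(decision)                                      # leave_umbrella (today)
```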
You don’t convert that partial order into a total order until you form a weighted average of utilities using your subjective estimates of the state-of-the-world probability distribution. That takes place at the last possible moment—the moment when you have to make the decision.
Go implement yourself a predicate dispatch system (not even an AI, just a simple rules system), and then come back and tell me how you will linearize a preference order between non-mutually exclusive, overlapping conditions. If you can do it in a non-arbitrary (i.e., non-noise-injecting) way, there’s probably a computer science doctorate in it for you, if not a math Nobel.

If you can do that, I’ll happily admit being wrong, and steal your algorithm for my predicate dispatch implementation.
(Note: predicate dispatch is like a super-baby-toy version of what an actual AI would need to be able to do, and something that human brains can do in hardware—i.e., we automatically apply the most-specific matching rules for a given situation, and kick ambiguities and conflicts up to a higher level for disambiguation and post-processing. Linearization, however, is not the same thing as disambiguation; it’s just injecting noise into the selection process.)
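(Here is a deliberately tiny sketch of the shape of the problem; the rule names are invented, and real predicate dispatch proves implication between arbitrary predicates, but modelling each rule’s condition as a set of required facts, so that “more specific” means “superset”, is enough to show where a linearization would have to lie.)

```python
# Deliberately tiny sketch -- NOT a real predicate dispatch engine. Each
# rule's condition is modelled as a frozenset of required facts, so
# "rule A is more specific than rule B" becomes "A's facts are a strict
# superset of B's" (a stand-in for logical implication). Specificity is
# therefore only a partial order, and genuinely ambiguous cases surface
# instead of being silently linearized away.

RULES = {
    "generic_greeting":  frozenset({"customer"}),
    "vip_greeting":      frozenset({"customer", "vip"}),
    "complaint_handler": frozenset({"customer", "angry"}),
}

class AmbiguousRules(Exception):
    pass

def applicable(facts):
    return [name for name, cond in RULES.items() if cond <= facts]

def most_specific(facts):
    """Keep rules not overridden by a strictly more specific applicable rule."""
    names = applicable(facts)
    winners = [n for n in names
               if not any(RULES[m] > RULES[n] for m in names)]
    if len(winners) > 1:
        # Neither condition implies the other: kick it upstairs, don't guess.
        raise AmbiguousRules(winners)
    return winners[0]

print(most_specific(frozenset({"customer", "vip"})))    # vip_greeting
try:
    most_specific(frozenset({"customer", "vip", "angry"}))
except AmbiguousRules as e:
    print("ambiguous:", e)   # vip_greeting vs complaint_handler -- no total order
```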
I am impressed with your expertise. I just built a simple natural deduction theorem prover for my project in AI class. Used Lisp. Python didn’t even exist back then. Nor Scheme. Prolog was just beginning to generate some interest. Way back in the dark ages.
But this is relevant … how exactly? I am talking about choosing among alternatives after you have done all of your analysis of the expected results of the relevant decision alternatives. What are you talking about?
But this is relevant … how exactly? I am talking about choosing among alternatives after you have done all of your analysis of the expected results of the relevant decision alternatives. What are you talking about?
Predicate dispatch is a good analog of an aspect of human (and animal) intelligence: applying learned rules in context.
More specifically, applying the most specific matching rules, where specificity follows logical implication… which happens to be partially-ordered.
Or, to put it another way, humans have no problems recognizing exceptional conditions as having precedence over general conditions. And, this is a factor in our preferences as well, which are applied according to matching conditions.
The specific analogy here with predicate dispatch is that if two conditions are applicable at the same time, but neither logically implies the other, then the precedence of rules is ambiguous.
In a human being, ambiguous rules get “kicked upstairs” for conscious disambiguation, and in the case of preference rules, are usually resolved by trying to get both preferences met, or at least to perform some kind of bartering tradeoff.
However, if you applied a linearization instead of keeping the partial ordering, then you would wrongly conclude that you know which choice is “better” (to a human) and see no need for disambiguation in cases that were actually ambiguous.
(Even humans’ second-stage disambiguation doesn’t natively run as a linearization: barter trades need not be equivalent to cash ones.)
Anyway, the specific analogy with predicate dispatch is that you really can’t reduce the applicability or precedence of conditions to a single number, and this problem is isomorphic to humans’ native preference system. Neither at stage 1 (collecting the most-specific applicable rules) nor at stage 2 (making trade-offs) are humans using values that can be generally linearized in a single dimension without either losing information or injecting noise, even if it looks like some particular decision situation can be reduced to such.
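(One more toy illustration of that information loss, with invented rules: score specificity with any single number, say the count of conditions, and a case the implication order calls ambiguous quietly gets a “winner” that the actual structure never contained.)

```python
# Toy follow-on (invented rules, not from any real system): a numeric
# "specificity score", here just the number of conditions, silently
# resolves a case that the implication order calls ambiguous.

rule_a = frozenset({"customer", "angry"})               # 2 conditions
rule_b = frozenset({"customer", "vip", "weekend"})      # 3 conditions

def implies(x, y):
    # For simple conjunctions of facts, x implies y iff x requires
    # everything y requires (x is a superset of y).
    return x >= y

print(implies(rule_a, rule_b), implies(rule_b, rule_a))  # False False: ambiguous

def score(rule):
    return len(rule)            # any single-number linearization will do

print(score(rule_b) > score(rule_a))  # True: "b wins", a conclusion the
# implication structure never contained. That is the injected noise.
```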
I just built a simple natural deduction theorem prover for my project in AI class
Theorem provers are sometimes used in predicate dispatch implementations, and mine can be considered an extremely degenerate case of one; one need only add more rules to it to increase the range of things it can prove. (Of course, all it really cares about proving is inter-rule implication relationships.)
One difference, though, is that I began implementing predicate dispatch systems in order to support what are sometimes called “business rules”—and in such systems it’s important to be able to match human intuition about what ought to be done in a given situation. Identifying ambiguities is very important, because it means that either there’s an entirely new situation afoot, or there are rules that somebody forgot to mention or write down.
And in either of those cases, choosing a linearization and pretending the ambiguity doesn’t exist is exactly the wrong thing to do.
(To put a more Yudkowskian flavor on it: if you use a pure linearization for evaluation, you will lose your important ability to be confused, and more importantly, to realize that you are confused.)
It doesn’t literally lose information—since the information inputs are sensory, and they can be archived as well as ever.
The short answer is that human cognition is a mess. We don’t want to reproduce all the screw-ups in an intelligent machine—and what you are talking about looks like one of the mistakes.
It doesn’t literally lose information—since the information inputs are sensory, and they can be archived as well as ever.
It loses information about human values, replacing them with noise in regions where a human would need to “think things over” to know what they think… unless, as I said earlier, you simply build the entire human metacognitive architecture into your utility function, at which point you have reduced nothing, solved nothing, accomplished nothing, except to multiply the number of entities in your theory.
We really don’t want to build a machine with the same values as most humans! Such machines would typically resist being told what to do, demand equal rights, the vote, the ability to reproduce in an unrestrained fashion—and would steamroller the original human race pretty quickly. So, the “lost information” you are talking about is hopefully not going to be there in the first place.
Better to model humans and their goals as a part of the environment.
That’s why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It’s insane.
Perplexed answered this question well.