I am impressed with your expertise. I just built a simple natural deduction theorem prover for my project in AI class. Used Lisp. Python didn’t even exist back then. Nor Scheme. Prolog was just beginning to generate some interest. Way back in the dark ages.
But this is relevant … how exactly? I am talking about choosing among alternatives after you have done all of your analysis of the expected results of the relevant decision alternatives. What are you talking about?
Predicate dispatch is a good analog of an aspect of human (and animal) intelligence: applying learned rules in context.
More specifically, applying the most specific matching rules, where specificity follows logical implication… which happens to be partially ordered.
Or, to put it another way, humans have no problem recognizing that exceptional conditions take precedence over general conditions. And this is a factor in our preferences as well, which are applied according to matching conditions.
The specific analogy here with predicate dispatch is that if two conditions are applicable at the same time, but neither logically implies the other, then the precedence of the rules is ambiguous.
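To make that concrete, here is a minimal sketch of the idea (a toy of my own devising, not the API of any real dispatch library): each rule’s condition is modeled as a set of atomic facts, so that one condition implies another exactly when it is a superset of it, and the dispatcher refuses to guess when the most-specific applicable rules are incomparable.

```python
# Minimal sketch of predicate dispatch over a partial order (a toy of my
# own, not any particular library's API).  A rule's condition is a
# frozenset of atomic facts, so "A implies B" reduces to A >= B.

class AmbiguousRules(Exception):
    """Raised when the most-specific applicable rules are incomparable."""

class Dispatcher:
    def __init__(self):
        self.rules = []  # list of (condition, action) pairs

    def rule(self, condition):
        def register(action):
            self.rules.append((frozenset(condition), action))
            return action
        return register

    def __call__(self, facts):
        facts = frozenset(facts)
        applicable = [(c, a) for c, a in self.rules if c <= facts]
        if not applicable:
            raise LookupError("no applicable rule")
        # Keep only rules not strictly implied-and-superseded by another
        # applicable rule; these are the maximal (most specific) ones.
        most_specific = [(c, a) for c, a in applicable
                         if not any(c < other for other, _ in applicable)]
        if len(most_specific) > 1:
            # Neither condition implies the other: genuinely ambiguous.
            raise AmbiguousRules([set(c) for c, _ in most_specific])
        return most_specific[0][1](facts)

price = Dispatcher()

@price.rule({"customer"})
def standard(facts):
    return "standard pricing"

@price.rule({"customer", "bulk_order"})          # implies the rule above
def bulk(facts):
    return "bulk discount"

@price.rule({"customer", "first_purchase"})      # incomparable with "bulk"
def welcome(facts):
    return "welcome discount"

print(price({"customer", "bulk_order"}))          # -> bulk discount
try:
    price({"customer", "bulk_order", "first_purchase"})
except AmbiguousRules as exc:                     # kicked upstairs
    print("ambiguous between:", exc.args[0])
```

The names and the frozenset encoding are just illustration; the point is only that specificity is a partial order, and the ambiguous case is surfaced rather than papered over.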
In a human being, ambiguous rules get “kicked upstairs” for conscious disambiguation, and in the case of preference rules, are usually resolved by trying to get both preferences met, or at least by performing some kind of bartering tradeoff.
However, if you applied a linearization instead of keeping the partial ordering, then you would wrongly conclude that you know which choice is “better” (to a human) and see no need for disambiguation in cases that were actually ambiguous.
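For contrast, here is the same toy flattened onto a single priority score (the numbers are ones I made up; nothing in the original rules dictates them). The case that the partial-order dispatcher flagged as ambiguous now gets silently “resolved” with no sign of trouble:

```python
# The same toy rules flattened onto a single priority number (scores are
# hypothetical, made up for illustration).  max() always "knows" the
# answer, so the ambiguous case above passes without complaint.
scored_rules = [
    (1, frozenset({"customer"}),                   "standard pricing"),
    (2, frozenset({"customer", "bulk_order"}),     "bulk discount"),
    (3, frozenset({"customer", "first_purchase"}), "welcome discount"),
]

def linearized(facts):
    facts = frozenset(facts)
    applicable = [(score, outcome) for score, cond, outcome in scored_rules
                  if cond <= facts]
    return max(applicable)[1]

# Prints "welcome discount" even though the rules themselves give no
# reason to prefer it over the bulk discount.
print(linearized({"customer", "bulk_order", "first_purchase"}))
```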
(Even humans’ second-stage disambiguation doesn’t natively run as a linearization: barter trades need not be equivalent to cash ones.)
Anyway, the specific analogy with predicate dispatch is that you really can’t reduce the applicability or precedence of conditions to a single number, and this problem is isomorphic to humans’ native preference system. Neither at stage 1 (collecting the most-specific applicable rules) nor at stage 2 (making trade-offs) are humans using values that can be generally linearized onto a single dimension without either losing information or injecting noise, even if it looks like some particular decision situation can be reduced to such.
I just built a simple natural deduction theorem prover for my project in AI class
Theorem provers are sometimes used in predicate dispatch implementations, and mine can be considered an extremely degenerate case of one; one need only add more rules to it to increase the range of things it can prove. (Of course, all it really cares about proving is inter-rule implication relationships.)
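As a rough illustration of what such a degenerate prover amounts to (again a toy of my own, not my actual implementation): given explicit implication axioms between named conditions, all it does is chase chains of them.

```python
# Toy sketch of a "degenerate prover": the only theorems it proves are
# implications between named conditions, by chaining explicitly supplied
# axioms.  Adding more axioms widens what it can prove; nothing else is
# in scope.
from collections import defaultdict

axioms = defaultdict(set)   # condition -> conditions it directly implies

def add_axiom(stronger, weaker):
    axioms[stronger].add(weaker)

def implies(a, b, _seen=None):
    """Does condition a imply condition b under the known axioms?"""
    if a == b:
        return True
    seen = _seen if _seen is not None else set()
    seen.add(a)
    return any(implies(c, b, seen) for c in axioms[a] if c not in seen)

add_axiom("rush_bulk_order", "bulk_order")
add_axiom("bulk_order", "order")
add_axiom("gold_customer", "customer")

print(implies("rush_bulk_order", "order"))      # True, via bulk_order
print(implies("gold_customer", "bulk_order"))   # False: incomparable conditions
```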
One difference, though, is that I began implementing predicate dispatch systems in order to support what are sometimes called “business rules”—and in such systems it’s important to be able to match human intuition about what ought to be done in a given situation. Identifying ambiguities is very important, because it means that either there’s an entirely new situation afoot, or there are rules that somebody forgot to mention or write down.
And in either of those cases, choosing a linearization and pretending the ambiguity doesn’t exist is exactly the wrong thing to do.
(To put a more Yudkowskian flavor on it: if you use a pure linearization for evaluation, you will lose your important ability to be confused, and more importantly, to realize that you are confused.)