I think you have homed in exactly on the place where the disagreement is located. I am glad we got here so quickly (it usually takes a very long time, where it happens at all).
Yes, it is the fact that “weak constraint” systems (supposedly) make the greatest possible attempt to find a state of mutual consistency among the concepts that leads me to conclusions so different from the ones that seem to inhere in logical approaches to AGI. There is really no overstating the drastic difference between these two perspectives: this is not just a matter of two possible mechanisms, it is much more like a clash of paradigms (if you’ll forgive a cliché that I know some people absolutely abhor).
One way to summarize the difference is by imagining a sequence of AI designs, with progressive increases in sophistication. At the beginning, the representation of concepts is simple, the truth values are just T and F, and the rules for generating new theorems from the axioms are simple and rigid.
As the designs get better various new features are introduced … but one way to look at the progression of features is that constraints between elements of the system get more widespread, and more subtle in nature, as the types of AI become better and better.
An almost trivial example of what I mean: when someone builds a real-time reasoning engine in which the time spent on certain types of searches in the knowledge base has to be strictly curtailed, a wise AI programmer will insert some sanity checks that kick in after the search has been cut short. The sanity checks are a kind of linkage from the inference being examined to the rest of the knowledge that the system has, to see whether the truncated reasoning left the system concluding something that is patently stupid. These sanity checks are almost always external to the logical process (for which read: they are glorified kludges), but in a real-world system they are absolutely vital. Now, from my point of view, what these sanity checks do is act as weak constraints on one little episode in the behavior of the system.
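To make that concrete, here is a rough Python sketch of the shape I have in mind. Everything in it is invented for illustration (the knowledge-base object, its candidate_inferences generator, and the particular checks are hypothetical names, not anything from a real system), but it shows the pattern: a hard time budget on the search, followed by a handful of small, independent checks that link the truncated result back to the rest of the system’s knowledge.

    import time

    def bounded_search(kb, query, time_budget_s=0.05):
        """Scan candidate inferences, keeping the best, cut off hard at the deadline."""
        deadline = time.monotonic() + time_budget_s
        best = None
        for candidate in kb.candidate_inferences(query):   # hypothetical generator
            if time.monotonic() > deadline:
                break                                      # strict curtailment of the search
            if best is None or candidate.score > best.score:
                best = candidate
        return best

    def accept_inference(kb, inference, sanity_checks):
        """Each check is a small, independent constraint linking the inference to
        the rest of the system's knowledge; any one of them can veto a patently
        stupid conclusion produced by the truncated search."""
        if inference is None:
            return None
        for check in sanity_checks:
            if not check(kb, inference):
                return None            # reject rather than believe nonsense
        return inference

No single check carries much weight on its own; the protection comes from having lots of them.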
Okay, so if you buy my suggestion that in practice AI systems become better the more they allow the little reasoning episodes to be connected to the rest of the system by weak constraints, then I would like to go one step further and propose the following:
1) As a matter of fact, you can build AI systems (or parts of AI systems) that take the whole “let’s connect everything up with weak constraints” idea to an extreme, throwing away almost everything else (all the logic!) and keeping only the huge population of constraints, and something amazing happens: the system works better that way. (An old classic example, but one which still has lessons to teach, is the very crude Interactive Activation model of word recognition. Seen in its historical context it was a bombshell, because it dumped all the procedural programming that people had thought was necessary to do word recognition from features and replaced it with nothing but weak constraints ... and it worked better than any procedural program was able to do. There is a toy sketch of that style of computation just after point 2 below.)
2) This extreme attitude to the power of weak constraints comes with a price: you CANNOT have mathematical assurances or guarantees of correct behavior. Your new weak-constraint system might actually be infinitely more reliable and stable than any of the systems you could build for which some kind of mathematical guarantee of correctness or convergence is possible, but you might never be able to prove that fact (except with some general talk about the properties of ensembles).
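Here is the tiny Python sketch I promised, to show what I mean by “nothing but weak constraints”, in the spirit of the Interactive Activation idea (though grossly simplified: the lexicon, weights and update rule below are my own illustrative choices, not the original model’s). Four word units receive excitatory support from whatever letters are visible and inhibit one another; given a degraded input, the words most consistent with all the evidence simply win the relaxation, with no procedural recognition logic anywhere.

    import numpy as np

    LEXICON = ["WORK", "WORD", "WEAK", "FORK"]

    def letter_evidence(observed):
        """Bottom-up support: +1 for each position whose letter matches the word."""
        return np.array([sum(o == w for o, w in zip(observed, word))
                         for word in LEXICON], dtype=float)

    def relax(evidence, steps=50, excite=0.1, inhibit=0.15, decay=0.1):
        """Let the word units settle under excitation from the letters,
        lateral inhibition from each other, and a small decay."""
        act = np.zeros(len(LEXICON))
        for _ in range(steps):
            lateral = inhibit * (act.sum() - act)   # each word inhibits its rivals
            act = np.clip(act + excite * evidence - lateral - decay * act, 0.0, 1.0)
        return act

    # Degraded input: the last letter is unreadable, written here as '?'.
    acts = relax(letter_evidence("WOR?"))
    print(dict(zip(LEXICON, acts.round(2))))
    # -> WORK and WORD end up fully active; WEAK and FORK are suppressed.

The point is not that this toy does anything impressive; it is that every piece of “knowledge” in it is just a small excitatory or inhibitory link, and the answer falls out of letting all of those links push at once.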
All of that is what is buried in the phrase I stole from Yann LeCun: the “unreasonable effectiveness” idea. These systems are unreasonably good at doing what they do. They shouldn’t be so good. But they are.
As you can imagine, this is such a huge departure from the traditional way of thinking in AI that many people find it completely alien. Believe it or not, I know people who seem willing to go to any lengths to destroy the credibility of anyone who suggests that mathematical rigor might be a bad thing in AI, or that there are ways of doing AI that are better than the status quo but that involve downgrading mathematics to a technical-support role rather than giving it primacy.
--
On your last question, I should say that I was only referring to the fact that in systems of weak constraints the constraints are largely independent of one another, and each one is relatively small, so it is hard for an extremely inconsistent ‘belief’ or ‘fact’ to survive without being corrected. This is all about the idea of a “single point of failure” and its antithesis.
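If it helps, here is one more toy Python sketch of that last point (again entirely my own illustration, not a description of any particular system): fifty ‘beliefs’, each weakly tied to about ten random peers and, except for one, to its own weak evidence. The one aberrant belief has nothing of its own holding it up, so the many small constraints from its neighbours quietly drag it back into line.

    import numpy as np

    rng = np.random.default_rng(0)

    n = 50
    own_evidence = np.full(n, 1.0)   # each belief's own (weak) supporting evidence
    beliefs = own_evidence.copy()
    beliefs[7] = 25.0                # one extremely inconsistent belief, with no support

    # Each belief is weakly tied to ~10 randomly chosen peers; none of these
    # links matters much on its own.
    peers = [rng.choice([j for j in range(n) if j != i], size=10, replace=False)
             for i in range(n)]

    for _ in range(300):
        new = beliefs.copy()
        for i in range(n):
            nudge = 0.05 * (beliefs[peers[i]].mean() - beliefs[i])   # agree with peers
            if i != 7:                             # belief 7 has lost its own evidence
                nudge += 0.05 * (own_evidence[i] - beliefs[i])
            new[i] = beliefs[i] + nudge
        beliefs = new

    print(round(float(beliefs[7]), 2))   # pulled from 25.0 back to roughly 1.0

Remove any one of those peer links and nothing much changes; corrupt any one belief and the rest of the population quietly absorbs it. That is the sense in which I mean the antithesis of a single point of failure.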