@Brandon Reinhart:
True, true. I wish I had something more rigorous, but if I did, I would be writing a paper on it right now! All I have are some vague intuitive ideas.
My intuition is that the key here is good knowledge representation systems. First Order Predicate Logic (FOPL) is good at something rather different from representing knowledge about the real world: it is good at representing statements about clean, abstract entities, namely the truths of a particular formal system.
My first intuition about the difference between statements in a formal language and representations useful for describing the real world is that FOPL statements don't come with a natural topology, and I think one ought to have something like that. A key idea in the mathematics underlying physics (calculus, etc.) is that we can approximate things, an idea ultimately formalized by the metric on the real numbers, or on R^n. As I recall, the inventors of calculus started by cutting things into very small slices, or by approximating a curve with small line segments.
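To make the "closeness" point concrete, here is a toy Python sketch of my own (not any standard formalism): I pretend that representations are functions on [0, 1], use the sup-norm as the metric, and watch piecewise-linear approximations converge.

    import math

    def sup_distance(f, g, samples=1000):
        # Approximate sup-norm distance between two functions on [0, 1].
        return max(abs(f(i / samples) - g(i / samples)) for i in range(samples + 1))

    def piecewise_linear(f, pieces):
        # Approximate f by joining `pieces` sample points with line segments.
        ys = [f(i / pieces) for i in range(pieces + 1)]
        def approx(x):
            i = min(int(x * pieces), pieces - 1)
            t = x * pieces - i
            return ys[i] * (1 - t) + ys[i + 1] * t
        return approx

    f = lambda x: math.sin(2 * math.pi * x)
    for n in (2, 4, 8, 16, 32):
        print(n, sup_distance(f, piecewise_linear(f, n)))
    # The distances shrink as n grows: the approximations converge to f in
    # this metric. FOPL statements come with no analogous notion of "close".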
I think that one needs some way of expressing the fact that two statements are conceptually close, and I don't think that something like mutual information or correlation of random variables does the trick. Why not? Well, think about the way a Taylor series approximates a curve (for example, the way it approximates the solution to a differential equation). You build up the representation of a quite complex real-world object from a series of rather simple elements, the monomials (1, x, x^2, ...), by taking a limit. Does probability theory allow us to do this? Can I take a series of simple FOPL statements and approximate a messy, real-world thing like a "fuzzy category" with them? I don't think so. At least, the AI books I've read all seem to limit themselves to encoding a real-world concept as one particular FOPL statement, e.g.
Apple(X) <==> [ Green(X) or Red(X) ] and Edible(X) and Size(X, medium), etc.
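Contrast that with a graded score. Here is a deliberately crude sketch of my own (the predicate names, weights, and the 8 cm peak are all made up for illustration) showing how the hard encoding falls off a cliff where a graded one degrades gracefully:

    import math

    def apple_fopl(color, edible, size):
        # The rigid encoding: a thing either is or is not an apple.
        return color in ("green", "red") and edible and size == "medium"

    def apple_graded(color, edible, size_cm):
        # A crude graded score in [0, 1]; weights and the 8 cm peak are invented.
        color_score = {"red": 1.0, "green": 1.0, "yellow": 0.7}.get(color, 0.1)
        size_score = math.exp(-((size_cm - 8.0) ** 2) / 8.0)  # peaked near 8 cm
        return color_score * (1.0 if edible else 0.0) * size_score

    print(apple_fopl("yellow", True, "medium"))  # False: one step off the definition
    print(apple_graded("yellow", True, 7.5))     # ~0.68: still recognizably apple-like

The graded version is still a fixed formula, of course; the missing piece is a principled way to take limits of such representations, not just to pick one.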
This, I think, is the equivalent of believing that the motion of a particle is ALWAYS some actual polynomial. In general, the motion of a particle is given by the solution to some system of ODEs or PDEs, which (by various theorems of analysis, under nice circumstances) has a solution on some open subset of R or R^n that you can approximate arbitrarily well by a polynomial.
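Concretely (again, just a toy example of my own): take y' = y with y(0) = 1. Its solution, exp(x), is not any polynomial, yet the Taylor partial sums 1, 1 + x, 1 + x + x^2/2, ... approximate it to any accuracy you like:

    import math

    def taylor_exp(x, degree):
        # Partial sum of the Taylor series of exp at 0, up to `degree`.
        return sum(x ** k / math.factorial(k) for k in range(degree + 1))

    x = 1.5
    for d in (1, 2, 4, 8, 16):
        print(d, abs(math.exp(x) - taylor_exp(x, d)))
    # The error shrinks toward 0: simple elements (monomials) plus a limit
    # recover the "real" motion. That limit step is what FOPL encodings lack.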
I think we're missing a whole chunk of theory about how to iteratively build complex representations from simple ones, how to take limits of conceptual representations, and how to then manipulate those limits.