Beautiful idea!
Is a Wiki separate from Wikipedia needed?
Similar problem: One thing I run into often on Wikipedia is entries that use the field’s particular mathematical notation for no reason other than that those symbols and expressions are the jargon of the field. The notation gets in the way of understanding what the entry is actually saying.
A similar problem: there seem to be academic papers that have practical applications, and yet the papers are written to be as unclear as possible, perhaps to take on that “important” sheen, perhaps simply because the authors are so deep in their own jargon that they assume all readers know everything they know. Consider papers in the AI field. :)
It sounds like you’re pegging “intelligence” to mean what I’d call a “universal predictor”. That is, something that can predict the future (or an unknown) given some information, and that can do so across a variety of problem types, where “variety” involves more than a little hand-waving.
Therefore, something that catches a fly ball (“knowing” the rules of parabolic movement) can predict the future, but is not particularly “intelligent” if that’s all it can do. It may even be a wee bit more “intelligent” if it can also predict where a mortar shell lands. It is even more “intelligent” if it predicts how to land a rocket on the moon. It is even more “intelligent” if it predicts the odds that any given cannon ball will land on a fort’s walls. Etc.
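(To make the fly-ball case concrete, here’s a toy sketch of such a one-trick predictor; the function name and numbers are mine, not yours. It “knows” exactly one thing: ideal, drag-free projectile motion.)

```python
import math

G = 9.81  # standard gravity, m/s^2

def predict_landing_distance(speed, angle_deg):
    """Range of an ideal projectile launched from flat ground, no air drag."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / G

# A fly ball leaving the bat at 30 m/s, 45 degrees:
print(predict_landing_distance(30, 45))  # ~91.7 m
```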
I agree with Brain that this is a narrow definition of “intelligence”. But that doesn’t stop it from being an appropriate goal for AI at this time. That the word “intelligence” is chosen to denote this goal seems more a result of culture than anything else. AI people go through a filter that extols “intelligence”. So … (One is reminded of many years ago, when some AI thinkers had the holy grail of creating a machine that would be able to do the highest order of thinking they could possibly imagine: proving theorems. Coincidentally, this is what these thinkers did for a living.)
Here’s a thought on pinning down that word, “variety”.
First, it seems to me that a “predictor” can be optimized to predict one thing very well. Call it a “tall” predictor (accuracy on the Y axis, problem domains along the X axis). Or it can be built to predict a lot of things rather poorly, but better than a coin. Call it a “flat” predictor. The question is: How efficient is it? How much prediction accuracy comes out of this “predictor” given the resources it consumes? Or, using the words “tall” and “flat” graphically, what’s the surface area covered by the predictor, given a fixed amount of resources?
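To make that “surface area” picture concrete, here’s a toy scoring sketch of my own (the chance level, the scoring rule, and the numbers are all made up for illustration): credit each domain only for accuracy above a coin flip, sum that over domains, and divide by the resources consumed.

```python
def surface_area_score(domain_accuracies, chance=0.5, resources=1.0):
    """Breadth-times-height of a predictor, per unit of resource consumed.

    domain_accuracies: accuracy in [0, 1] for each domain attempted.
    Only credit above 'chance' counts, so a coin scores zero everywhere.
    """
    area = sum(max(acc - chance, 0.0) for acc in domain_accuracies)
    return area / resources

# "Tall": superb in one domain, silent everywhere else.
print(surface_area_score([0.99]))        # 0.49
# "Flat": barely better than a coin, but across ten domains.
print(surface_area_score([0.55] * 10))   # ~0.50 -- nearly the same area
```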
Would not “intelligence”, as you mean it, be slightly more accurately defined as how efficient a predictor is and, uh, it’s gotta be really wide or we ignore it?
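In the same toy terms (again, the cutoff is my own invention), that last clause amounts to a minimum-breadth rule: a predictor below some number of domains scores zero, no matter how tall it is.

```python
def intelligence_score(domain_accuracies, chance=0.5, resources=1.0,
                       min_domains=3):
    """Efficiency of a predictor, but ignored entirely if it's too narrow."""
    if len(domain_accuracies) < min_domains:
        return 0.0  # "really wide or we ignore it"
    area = sum(max(acc - chance, 0.0) for acc in domain_accuracies)
    return area / resources

print(intelligence_score([0.99]))        # 0.0 -- tall, but too narrow to count
print(intelligence_score([0.55] * 10))   # ~0.50 -- wide enough to count
```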