Looking from mostly outside the field (which I’ve been interested in for decades but haven’t had the opportunity to be seriously involved in), it seems to me now that part of the problem is that there are so many different possible goals and levels of achievement which might be called “AI” but which won’t really satisfy most people once the chrome wears off. (Every new computing technology seemed to be calling itself AI for a while after computing first became widely discussed—just doing arithmetic was once “clearly” the domain of intelligence; later, it only applied to really clever things like expert systems [/sarcasm]. I haven’t seen so much of that lately, so either the common understanding of computing has become more realistic, or perhaps I’ve just trained myself to ignore it.)
For instance:
Let’s say I managed to hook together an English parsing engine, a common-sense engine/database (such as Cyc), and an inference engine in such a way as to be able to take factual knowledge (such as might be expressed in English as “Woozle usually goes shopping in the mornings on Mondays and Fridays”, plus sensed information such as the current day/time and my apparent or explicit absence from my desk) and respond to natural-language questions like “Where is Woozle?” with something like “Woozle is not here right now; Woozle has probably gone shopping and should be back by midday”, or even “Woozle has never returned home from Friday shopping later than 11:23, and most likely will be back by 10:37.” (Where the software understands that “most likely” is a reasonable expression to use for, say, a 2-sigma variance or something like that.)
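Just to make the idea concrete, here’s a minimal toy sketch in Python of the rule-plus-statistics inference I have in mind. Everything here is made up for illustration: the function name answer_where_is, the stored habit rule, and the list of past return times are my own stand-ins, and it obviously doesn’t touch Cyc or a real parser at all.

```python
from datetime import datetime
from statistics import mean, stdev

# Hypothetical stored facts (stand-ins for what a real common-sense KB would hold):
# the habit rule "Woozle usually goes shopping on Monday and Friday mornings",
# plus observed past return times (minutes after midnight), with one late outlier at 11:23.
SHOPPING_DAYS = {"Monday", "Friday"}
PAST_RETURNS = [585, 590, 600, 605, 610, 612, 618, 620, 625, 683]

def fmt(minutes: float) -> str:
    """Render minutes-after-midnight as H:MM."""
    return f"{int(minutes) // 60}:{int(minutes) % 60:02d}"

def answer_where_is(now: datetime, at_desk: bool) -> str:
    """Toy inference: combine the habit rule, sensed absence, and past observations."""
    if at_desk:
        return "Woozle is here."
    if now.strftime("%A") in SHOPPING_DAYS and now.hour < 12:
        latest = max(PAST_RETURNS)
        # Read "most likely" as mean + 2 sigma, per the 2-sigma reading above.
        likely = mean(PAST_RETURNS) + 2 * stdev(PAST_RETURNS)
        return (f"Woozle is not here right now; Woozle has probably gone shopping. "
                f"Woozle has never returned later than {fmt(latest)}, "
                f"and will most likely be back by {fmt(likely)}.")
    return "Woozle is not here right now; I don't know why."

# Example: a Friday morning with Woozle away from the desk.
print(answer_where_is(datetime(2009, 7, 10, 9, 30), at_desk=False))
```

The hard part, of course, isn’t this arithmetic; it’s getting from free-form English statements and questions to structured facts and back out again, which is exactly where the parser, the common-sense database, and the inference engine would have to do the real work.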
Would I have created AI? Or just a really useful piece of software? (Either way, why aren’t there more programs which pull together these techniques? Or am I making too many assumptions about what an “inference engine” can do? According to Wikipedia, there are even “reasoning engines”, which sounds extremely shiny...)
If that’s not enough, then what’s left? Creativity? Inductive reasoning?
In other words… where are we? Where’s the roadmap of what’s been done and what needs to be done?