I’m a first-year AI student, and we are currently in the middle of exploring AI ‘history’. Of course I don’t know a lot about AI yet, but the interesting part about learning the history of AI is that, in some sense, the climax of AI research is already behind us.
People got very interested in AI after the Dartmouth conference ( http://en.wikipedia.org/wiki/Dartmouth_Conferences ) and were so optimistic that they thought they could build an artificially intelligent system within 20 years.
And here we are, still struggling with seemingly simple things such as computer vision.
The problem is that they ran into some hard problems which they couldn’t really ignore. One of them is the frame problem ( http://www-formal.stanford.edu/leora/fp.pdf ). Another is the common-sense problem.
Solutions to many of them (I believe) require either 1) huge brute-force computing power or 2) machine learning. And machine learning is something we can’t seem to get very far with. Programming a computer to program itself — I can understand why that must be quite difficult to accomplish.
So since the 80s, AI researchers have mainly focused on building expert systems: systems that can do one specific task much better than humans can. But they fall short at many things that are very easy for humans (which is apparently called Moravec’s paradox).
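To make the idea of an expert system concrete, here is a minimal sketch of the classic approach: a forward-chaining rule engine that derives new conclusions from known facts. The rules and fact names below are hypothetical examples I made up for illustration, not taken from any real system.

```python
# Minimal forward-chaining rule engine, the core mechanism behind
# classic rule-based expert systems. Rules and facts are invented examples.

RULES = [
    # (set of required conditions, conclusion to add when all hold)
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles", "recent_travel"}, "recommend_isolation"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied,
    until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_rash", "recent_travel"}, RULES)
print(derived)
```

The engine is superhuman only within its narrow rule set; it has no common sense about anything outside it, which is exactly the limitation described above.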
Anyway, the point I’m trying to get across — and I’m interested in hearing whether you agree or not — is that AI was, and is, very overrated. I doubt we can ever make a truly intelligent artificial agent unless we solve the machine learning problem for real. And I doubt whether that is ever truly possible.