I'm a first-year AI student, and we are currently in the middle of exploring AI 'history'. Of course I don't know a lot about AI yet, but the interesting part about learning about the history of AI is that, in some sense, the climax of AI research is already behind us.
People got very interested in AI after the Dartmouth conference ( http://en.wikipedia.org/wiki/Dartmouth_Conferences ) and were so optimistic that they thought they could build an artificially intelligent system within 20 years.
And here we are, still struggling with seemingly simple things such as computer vision.
The problem is that they ran into some hard problems which they can't really ignore.
One of them is the frame problem. http://www-formal.stanford.edu/leora/fp.pdf
Another is the common sense problem.
Solutions to many of them (I believe) are either 1) huge brute-force power or 2) machine learning.
And machine learning is a thing which we can’t seem to get very far with. I can understand why programming a computer to program itself must be quite difficult to accomplish.
So since the 80s, AI researchers have mainly focused on building expert systems: systems which can do a certain task much better than humans, but which fall short at many things that are very easy for humans (which is apparently called Moravec's paradox).
Anyway, the point I'm trying to get across, and I'm interested in hearing whether you agree or not, is that AI was/is very overrated. I doubt we can ever make a real artificially intelligent agent unless we solve the machine learning problem for real. And I doubt whether that is ever truly possible.
And machine learning is a thing which we can’t seem to get very far with.
Standard vanilla supervised machine learning (e.g. backprop neural networks and SVMs) is not going anywhere fast, but deep learning is really a new thing under the sun.
but deep learning is really a new thing under the sun.
On the contrary, the idea of making deeper nets is nearly as old as ordinary 2-layer neural nets; successful implementations date back to the late '90s in the form of convolutional neural nets, and they had another burst of popularity in 2006.
Advances in hardware, data availability, heuristics about architecture and training, and large-scale corporate attention have allowed the current burst of rapid progress.
This is both heartening, because the foundations of its success are deep, and tempering, because the limitations that have held it back before could resurface to some degree.
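To make the shallow-versus-deep distinction concrete, here is a minimal numpy sketch (my own illustration, not something from the thread; layer sizes are arbitrary): a "vanilla" 2-layer net is input -> one hidden layer -> output, and a deeper net is the same construction with more hidden layers stacked in between.

```python
import numpy as np

# Illustrative forward passes only (no training); all sizes are arbitrary assumptions.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, weights):
    """Push x through a stack of weight matrices, with a ReLU between layers."""
    for i, w in enumerate(weights):
        x = x @ w
        if i < len(weights) - 1:  # no nonlinearity after the final layer
            x = relu(x)
    return x

x = rng.normal(size=(1, 32))  # one example with 32 features

# "Vanilla" 2-layer net: input -> one hidden layer -> output.
shallow = [rng.normal(size=(32, 64)), rng.normal(size=(64, 10))]

# A "deep" net is the same idea with more hidden layers stacked up.
deep = ([rng.normal(size=(32, 64))]
        + [rng.normal(size=(64, 64)) for _ in range(4)]
        + [rng.normal(size=(64, 10))])

print(forward(x, shallow).shape, forward(x, deep).shape)  # both print (1, 10)
```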
It's possible. We're an example of that. The question is whether it's humanly possible.
There's a common idea of an AI being able to make another twice as smart as itself, which could make another twice as smart as itself, and so on, causing an exponential increase in intelligence. But it seems just as likely that an AI could only make one half as smart as itself, in which case we'll never even be able to get the first human-level AI.
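A minimal sketch of the arithmetic behind that, under the toy assumption that each generation builds a successor exactly k times as smart as itself: intelligence after n generations is k^n, which explodes when k > 1 and dwindles when k < 1.

```python
# Toy model of recursive self-improvement with a fixed per-generation
# multiplier k; the starting level and the values of k are made-up assumptions.

def intelligence_after(n: int, k: float, start: float = 1.0) -> float:
    """Intelligence after n generations if each system builds a successor k times as smart."""
    return start * k ** n

for k in (2.0, 0.5):
    print(k, [round(intelligence_after(n, k), 3) for n in range(6)])
# 2.0 [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]        exponential takeoff
# 0.5 [1.0, 0.5, 0.25, 0.125, 0.062, 0.031]   each generation is weaker, so no takeoff
```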
The example you give to prove plausibility is also a counterexample to the argument you make immediately afterwards. We know that less-intelligent or even non-intelligent things can produce greater intelligence because humans evolved, and evolution is not intelligent.
It’s more a matter of whether we have enough time to drudge something reasonable out of the problem space. If we were smarter we could search it faster.
Evolution is an optimization process. It might not be “intelligent” depending on your definition, but it’s good enough for this. Of course, that just means that a rather powerful optimization process occurred just by chance. The real problem is, as you said, it’s extremely slow. We could probably search it faster, but that doesn’t mean that we can search it fast.
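As a toy illustration of blind variation plus selection being a real, if slow, optimization process, here is a minimal (1+1)-style evolutionary search; the bit-string target, mutation rate, and fitness function are all invented for the example.

```python
import random

# Blind mutation + selection climbing a made-up fitness landscape
# (number of bits matching an arbitrary target). No foresight involved.
random.seed(0)
N = 64
TARGET = [random.randint(0, 1) for _ in range(N)]

def fitness(candidate):
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=1.0 / N):
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

parent = [random.randint(0, 1) for _ in range(N)]
steps = 0
while fitness(parent) < N:
    child = mutate(parent)
    steps += 1
    if fitness(child) >= fitness(parent):  # selection: keep the child if it's no worse
        parent = child

print(f"found the optimum after {steps} mutation/selection steps")
```

The point is only that selection over random variation does reach the optimum, yet even this 64-bit toy problem typically takes hundreds of steps, which hints at how slow such a search gets on anything interesting.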