Deep Blue just brute-forces the game tree (more or less). Obviously, this is not at all how humans play chess. Deep Blue’s evaluation of a specific position is more “intelligent”, but it was simply hard-coded by the programmers; Deep Blue didn’t think of it.
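To make “brute-forces the game tree” concrete, here is a minimal minimax sketch in Python over a toy game tree. It is my own illustration, not Deep Blue’s actual code (the real system added alpha-beta pruning, custom hardware, and an evaluation function with thousands of hand-tuned features), but the shape of the search is the same: mechanical lookahead over positions, scored by numbers the programmers supplied.

    # Minimal minimax over a toy game tree. Interior nodes are lists of
    # children; leaves are hand-coded evaluation scores supplied by the
    # programmer, standing in for Deep Blue's position evaluation.
    def minimax(node, maximizing):
        if isinstance(node, (int, float)):   # leaf: programmer-written score
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # Depth-2 toy tree: the maximizer picks a branch, the minimizer replies.
    tree = [[3, 12], [2, 4], [14, 1]]
    print(minimax(tree, maximizing=True))    # -> 3, the best guaranteed score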
I can’t remember offhand, but there’s some AI researcher (maybe Marvin Minsky?) who pointed out that people use the word “intelligence” to describe whatever humans can do for which the underlying algorithms are not understood. So as we discover more and more algorithms for doing intelligent stuff, the goalposts for what constitutes “intelligence” keep getting moved. I think I remember one particular prominent intellectual who, decades ago, essentially declared that when chess could be played better by a computer than a human, the problem of AI would be solved. Why was this intellectual surprised? Because he didn’t realize that there were discoverable, implementable algorithms that could be used to carry out the task of playing chess. And in the same way, there exist algorithms for doing all the other thinking that people do (including inventing algorithms)… we just haven’t discovered and refined them the way we’ve discovered and refined chess-playing algorithms.
(Maybe you’re one of those Cartesian dualists who thinks humans have souls that don’t exist in physical reality and that’s how they do their thinking? Or you hold some other variation of the “brains are magic” position? Speaking of magic, that’s how ancient people thought about lightning and other phenomena that are well-understood today… given that human brains are probably the most complicated natural thing we know about, it’s not surprising that they’d be one of the last natural things for us to understand.)
The output of a machine-learning algorithm is basically a black box.
Hm, that doesn’t sound like an accurate description of all machine learning techniques. Would you consider the output of a regression a black box? I don’t think I would. What’s your machine learning background like, by the way?
Anyway, even if it’s a black box, I’d say it constitutes appreciable progress. It seems like you are counting it as a point against chess programs that we know exactly how they work, and a point against Watson that we don’t know exactly how it works.
There are impressive results that look like intelligence, and they are improving incrementally over time. There is no progress towards an efficient “intelligence algorithm”, or towards “understanding how intelligence works”.
My impression is that many, if not most, experts in AI see human intelligence as essentially algorithmic and see the field of AI as making slow progress towards something like human intelligence (e.g. see this interview series). Are you an expert in AI? If not, you are talking with an awful lot of certainty for a layman.
I think I remember one particular prominent intellectual who, decades ago, essentially declared that when chess could be played better by a computer than a human, the problem of AI would be solved.
Hofstadter, in Gödel, Escher, Bach?
Maybe you’re one of those Cartesian dualists who thinks humans have souls that don’t exist in physical reality and that’s how they do their thinking
Not at all. Brains are complicated, not magic. But complicated is bad enough.
Would you consider the output of a regression a black box?
In the sense that we don’t understand why the coefficients make sense; the only way to get that output is to feed a lot of data into the machine and see what comes out. It’s the difference between being able to make predictions and understanding what’s going on (e.g. compare epicycle astronomy with the Copernican model: equally good predictions, but one sheds far more light on what’s actually happening).
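To make the distinction concrete, here is a toy least-squares fit in numpy (my own made-up data, nothing from the discussion above). The coefficients come out of a closed-form solve and you can read each one off directly; the epicycle worry is that a model can fit just as well without the numbers explaining why they are what they are.

    # Toy ordinary-least-squares fit with numpy on synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=100)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=100)  # true slope 2, intercept 1

    X = np.column_stack([x, np.ones_like(x)])         # design matrix [x, 1]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # closed-form OLS solve
    print(coef)                                       # roughly [2.0, 1.0]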
What’s your machine learning background like, by the way?
One semester graduate course a few years ago.
It seems like you are counting it as a point against chess programs that we know exactly how they work, and a point against Watson that we don’t know exactly how it works.
The goal is to understand intelligence. We know that chess programs aren’t intelligent; the game tree is just luckily tractable enough for brute-force search (plus a hand-coded evaluation) to win. Watson might be “intelligent”, but we don’t know. We need programs that are intelligent and that we understand.
My impression is that many, if not most, experts in AI see human intelligence as essentially algorithmic and see the field of AI as making slow progress towards something like human intelligence
I agree. My point is that there isn’t likely to be a simple “intelligence algorithm”. All the people like Hofstadter who’ve looked for one have been floundering for decades, and all the progress has been made by forgetting about “intelligence” and carving out smaller areas.
Brains are complicated, not magic. But complicated is bad enough.
So would you consider this blog post to be consistent with your position?
I could believe that coding an AGI is an extremely laborious task with no shortcuts, one that could be accomplished only through an inordinately large number of years of work by an inordinately large team of inordinately bright people. I argued earlier (without protest from you) that most humans can’t make technological advances, so maybe there exists some advance A that is too hard for any human who will ever live to make, and AGI ends up requiring advance A. This is another way of saying that although AGI is possible in theory, in practice it ends up being too hard. (Or, to make a more probable but still very relevant claim, it might be sufficiently difficult that some other civilization-breaking technological advance ends up deciding the fate of the human race. That way AGI just has to be harder than the easiest civilization-breaking thing.)
I think I remember one particular prominent intellectual who, decades ago, essentially declared that when chess could be played better by a computer than a human, the problem of AI would be solved.
Hofstadter, in Gödel, Escher, Bach?
What? That runs contrary to, like, the last third of the book. Where in the book would one find this claim?
Here’s a blog post with some AI progress estimates: http://www.overcomingbias.com/2012/08/ai-progress-estimate.html
What? That runs contrary to, like, the last third of the book. Where in the book would one find this claim?
Previous discussion: http://lesswrong.com/lw/hp5/after_critical_event_w_happens_they_still_wont/95ow
I see. He got so focused on the power of strange loops that he forgot that you can do a whole lot without them.
I don’t have a copy handy. I distinctly remember this claim, though. This purports to be a quote from near the end of the book:
“Will there be chess programs that can beat anyone?” “No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players.” (http://www.psychologytoday.com/blog/the-decision-tree/201111/how-much-progress-has-artificial-intelligence-made)