I think I remember one particular prominent intellectual who, decades ago, essentially declared that when chess could be played better by a computer than a human, the problem of AI would be solved.
Hofstadter, in Gödel, Escher, Bach?
Maybe you’re one of those Cartesian dualists who think humans have souls that don’t exist in physical reality, and that that’s how they do their thinking.
Not at all. Brains are complicated, not magic. But complicated is bad enough.
Would you consider the output of a regression a black box?
In the sense that we don’t understand why the coefficients make sense; the only way to get that output is to feed a lot of data into the machine and see what comes out. It’s the difference between being able to make predictions and understanding what’s going on (e.g. compare epicycle astronomy with the Copernican model: equally good predictions, but one sheds better light on what’s happening).
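To make the black-box point concrete, here is a minimal sketch (scikit-learn and the made-up data are my illustrative assumptions, not anything from the discussion): the fit hands back coefficients that predict well, but nothing in the procedure explains why those particular numbers are the right ones.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Made-up data: in a real problem we wouldn't know the generating process.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=100)

    # Feed a lot of data into the machine and see what comes out.
    model = LinearRegression().fit(X, y)
    print(model.coef_)  # approximately [2.0, 0.0, -0.5]: good predictions,
                        # but the fit itself offers no account of why these
                        # coefficients and not others.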
What’s your machine learning background like, by the way?
One semester graduate course a few years ago.
It seems like you are counting it as a point against chess programs that we know exactly how they work, and a point against Watson that we don’t know exactly how it works.
The goal is to understand intelligence. We know that chess programs aren’t intelligent; the state space is just luckily small enough to brute force. Watson might be “intelligent”, but we don’t know. We need programs that are intelligent and that we understand.
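To illustrate what “brute force” means here, a toy sketch (single-heap Nim stands in for chess purely as an assumption for illustration; real chess engines need far more machinery): exhaustive game-tree search produces perfect play with nothing anyone would call understanding.

    from functools import lru_cache

    # Toy game: players alternately take 1-3 stones from a heap;
    # whoever takes the last stone wins. The state space is small
    # enough to enumerate completely.

    @lru_cache(maxsize=None)
    def best_value(stones):
        """+1 if the player to move wins with perfect play, else -1."""
        if stones == 0:
            return -1  # the previous player took the last stone and won
        return max(-best_value(stones - take)
                   for take in (1, 2, 3) if take <= stones)

    def best_move(stones):
        """The take that leaves the opponent in the worst position."""
        return max((take for take in (1, 2, 3) if take <= stones),
                   key=lambda take: -best_value(stones - take))

    print(best_value(20))  # -1: multiples of 4 lose for the player to move
    print(best_move(21))   # 1: take one stone, leaving the losing position 20

Perfect play falls out of the enumeration alone; nothing in it resembles understanding, which is the point being made about chess programs.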
My impression is that many, if not most, experts in AI see human intelligence as essentially algorithmic, and see the field of AI as making slow progress towards something like human intelligence.
I agree. My point is that there isn’t likely to be a simple “intelligence algorithm”. All the people like Hofstadter who’ve looked for one have been floundering for decades, and all the progress has been made by forgetting about “intelligence” and carving out smaller areas.
So would you consider this blog post to be in accordance with your position?
I could believe that coding an AGI is an extremely laborious task with no shortcuts that could be accomplished only through an inordinately large number of years of work by an inordinately large team of inordinately bright people. I argued earlier (without protest from you) that most humans can’t make technological advances, so maybe there exists some advance A such that it’s too hard for any human who will ever live to make, and AGI ends up requiring advance A? This is another way of saying that although AGI is possible in theory, in practice it ends up being too hard. (Or to make a more probable but still very relevant claim, it might be sufficiently difficult that some other civilization-breaking technological advance ends up deciding the fate of the human race. That way AGI just has to be harder than the easiest civilization-breaking thing.)
Here’s a blog post with some AI progress estimates: http://www.overcomingbias.com/2012/08/ai-progress-estimate.html
What? That runs contrary to, like, the last third of the book. Where in the book would one find this claim?
Previous discussion: http://lesswrong.com/lw/hp5/after_critical_event_w_happens_they_still_wont/95ow
I don’t have a copy handy. I distinctly remember this claim, though. This purports to be a quote from near the end of the book:
“Will there be chess programs that can beat anyone?” “No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players.” (http://www.psychologytoday.com/blog/the-decision-tree/201111/how-much-progress-has-artificial-intelligence-made)
I see. He got so focused on the power of strange loops that he forgot that you can do a whole lot without them.