but it just tells us that those problems are less interesting than we thought.
Extrapolating from the trend, it would not surprise me greatly if we eventually found out that intelligence in general is not as interesting as we thought.
When something is actually understood, the problem suffers from the rainbow effect: “Oh, it’s just light reflected from water droplets, how boring and not interesting at all.” It becomes a common thing and thus boring for some. I, for one, think go and chess are much more interesting games now that we actually know how they are played, not just how to play them.
My point was that go and chess are not actually understood. We don’t actually know how they’re played. There are hacks that allow programs to get good at those games without actually understanding the patterns involved, but recognizing the patterns involved is what humans actually find interesting about the games.
To clarify, “understanding chess” is an interesting problem. It turns out that “writing a program to be very good at chess” isn’t, because it can be solved by brute force in an uninteresting way.
Another example: suppose computer program X and computer program Y are both capable of writing great novels, and human reviewers can’t tell the difference between X’s novels, Y’s novels, and a human’s. However, X uses statistical analysis at the word and sentence level to fill in a hard-coded “novel template,” whereas Y creates characters, simulates their personalities and emotions, and simulates interactions between them. Both have solved the (uninteresting) problem of writing great novels, but Y has solved the (interesting) problem of understanding how people write novels.
(ETA: I suspect that program X wouldn’t actually be able to write great novels, and I suspect that writing great novels is therefore actually an interesting problem, but I could be wrong. People used to think that about chess.)
What’s happened in AI research is that Y (which is actually AI) is too difficult, so people successfully solve problems the way program X (which is not AI) does. But don’t let this confuse you into thinking that AI has been successful.
My point was that go and chess are not actually understood. We don’t actually know how they’re played. There are hacks that allow programs to get good at those games without actually understanding the patterns involved, but recognizing the patterns involved is what humans actually find interesting about the games.
That’s not really true. In the last two decades or so, there has been a lot of progress in reverse-engineering how chess masters think and incorporating that knowledge into chess engines. Of course, in some cases such knowledge is basically useless, so it’s not pursued much. For example, there’s no point in teaching computers the heuristics that humans use to recognize immediate tactical combinations, where a brute-force search would be impossible for humans but a computer can perform it in a millisecond.
However, when it comes to long-term positional strategy, brute-force search is useless, no matter how fast, and until the mid-1990s, top grandmasters could still reliably beat computers by avoiding tactics and pursuing long-term strategic advantage. That’s not possible any more, since computers actually can think strategically now. (This outcome was disappointing in a sense, since it basically turned out that the human grandmasters’ extraordinary strategic abilities are much more due to recognizing a multitude of patterns learned from experience than flashes of brilliant insight.)
Even the relative importance of brute-force search capabilities has declined greatly. To take one example, the Deep Blue engines that famously matched Kasparov’s ability in 1996 and 1997 relied on specialized hardware that enabled them to evaluate something like 100-200 million positions per second, while a few years later the Fritz and Junior engines successfully drew against him even though their search capabilities were smaller by two orders of magnitude. In 2006, the world champion Kramnik was soundly defeated by an engine evaluating a mere 8 million positions per second, which would have been unthinkable a decade earlier.
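To make the contrast concrete, here is a minimal sketch of how human positional knowledge can be folded into a static evaluation alongside raw material counting. Piece-square tables are a standard technique for this kind of encoded pattern knowledge; the particular numbers below are illustrative only and not taken from any real engine.

```python
# Toy static evaluation: material plus a positional bonus table.
# All numbers are illustrative, not taken from any real engine.

# Material values in centipawns (a common convention).
MATERIAL = {'P': 100, 'N': 320, 'B': 330, 'R': 500, 'Q': 900, 'K': 0}

# Hypothetical piece-square table for knights: central squares score higher,
# encoding the human heuristic that knights belong in the centre.
KNIGHT_PST = [
    [-50, -40, -30, -30, -30, -30, -40, -50],
    [-40, -20,   0,   5,   5,   0, -20, -40],
    [-30,   0,  10,  15,  15,  10,   0, -30],
    [-30,   5,  15,  20,  20,  15,   5, -30],
    [-30,   5,  15,  20,  20,  15,   5, -30],
    [-30,   0,  10,  15,  15,  10,   0, -30],
    [-40, -20,   0,   5,   5,   0, -20, -40],
    [-50, -40, -30, -30, -30, -30, -40, -50],
]

def evaluate(board):
    """board: 8x8 list of lists; 'N' = white knight, 'n' = black knight, etc.,
    '.' = empty. Returns a score from White's point of view."""
    score = 0
    for rank in range(8):
        for file in range(8):
            piece = board[rank][file]
            if piece == '.':
                continue
            sign = 1 if piece.isupper() else -1
            score += sign * MATERIAL[piece.upper()]
            if piece.upper() == 'N':
                # Positional bonus; mirror the table for Black.
                row = rank if piece.isupper() else 7 - rank
                score += sign * KNIGHT_PST[row][file]
    return score

# Empty board with one white knight in the centre vs one on the rim.
empty = [['.'] * 8 for _ in range(8)]
central, rim = [row[:] for row in empty], [row[:] for row in empty]
central[3][3] = 'N'
rim[0][0] = 'N'
print(evaluate(central), evaluate(rim))  # 340 vs 270: the centre scores higher
```

A pure brute-force searcher with no such table only avoids bad squares when a tactic punishes them within its search horizon; the table is the kind of pattern knowledge a strong human applies without calculating.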
Even the relative importance of brute-force search capabilities has declined greatly.
Thanks for updating me; I was indeed thinking of Deep Blue in the mid 90s. Good to know that chess programs are becoming more intelligent and less forceful.
(This outcome was disappointing in a sense, since it basically turned out that the human grandmasters’ extraordinary strategic abilities are much more due to recognizing a multitude of patterns learned from experience than flashes of brilliant insight.)
This is what I would expect; a flash of brilliant insight is what recognizing a pattern feels like from the inside.
but Y has solved the (interesting) problem of understanding how people write novels.
I think the whole point in AI research is to do something, not find out how humans do something. You personally might find psychology (how humans work) far more interesting than AI research (how to do things traditionally classified as ‘intelligence’, regardless of the actual method), but please don’t generalize that notion and slap the label “uninteresting” onto problems.
What’s happened in AI research is that Y (which is actually AI) is too difficult, so people successfully solve problems the way program X (which is not AI) does. But don’t let this confuse you into thinking that AI has been successful.
When mysterious things cease to be mysterious, they tend to resemble the way X works.
Consider the advent of powered flight. By that line of argumentation one could write “We don’t actually understand how flight works. There are hacks that allow machines to fly without actually understanding how birds fly.” Or we could compare cars with legs and say that transportation in general is just an ugly, uninteresting hack.
I think the whole point in AI research is to do something, not find out how humans do something.
Depends on who’s doing the research and why. You’re right that companies that want to sell software care about solving the problem, which is why that type of approach is so common. On the other hand, I’m reluctant to call a mostly brute-forced solution “AI research”, even if it’s useful computer programming.
When mysterious things cease to be mysterious, they tend to resemble the way X works.
No, I think you’re missing my point. X is uninteresting not because it is no longer mysterious, but because it has no large-scale structure and patterns. We could consider another novel-writing program Z that writes novels in some other interesting and complicated way that’s different than how humans do it, but still has a rich and detailed structure.
Continuing with the flight analogy: rockets, helicopters, planes, and birds all have interesting ways of flying, whereas the “brute force” approach to flight, throwing a rock really really hard, is not that interesting.
Another example: optical character recognition. One approach is to have a database of hundreds of different fonts, put a grid on each character from each font, and come up with a statistical measure that figures out how close the scanned image is to each stored character by looking at the pixels that they have in common. This works and produces useful software, but that approach doesn’t actually care about the different letterforms and shapes involved with them. It doesn’t recognize that structure, even though that’s what the problem is about.
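To make that concrete, here is a minimal sketch of the pixel-overlap approach just described; the tiny 5×5 “templates” and the agreement score are invented for the example, and a real system would store many fonts at far higher resolution and use more robust distance measures.

```python
import numpy as np

# Hypothetical 5x5 binary templates standing in for a real font database.
TEMPLATES = {
    'I': np.array([[0,1,1,1,0],
                   [0,0,1,0,0],
                   [0,0,1,0,0],
                   [0,0,1,0,0],
                   [0,1,1,1,0]]),
    'L': np.array([[0,1,0,0,0],
                   [0,1,0,0,0],
                   [0,1,0,0,0],
                   [0,1,0,0,0],
                   [0,1,1,1,0]]),
}

def classify(glyph):
    """glyph: 5x5 binary array of the scanned character.
    Returns the template label whose pixels agree with the glyph best."""
    def score(template):
        # Count pixels the glyph and template agree on (both on or both off).
        return np.sum(glyph == template)
    return max(TEMPLATES, key=lambda label: score(TEMPLATES[label]))

# A slightly distorted 'L' (one stray pixel) still matches 'L'.
noisy_L = np.array([[0,1,0,0,0],
                    [0,1,0,0,0],
                    [0,1,0,1,0],
                    [0,1,0,0,0],
                    [0,1,1,1,0]])
print(classify(noisy_L))  # -> 'L'
```

Notice that nothing in this procedure knows anything about strokes, letterforms, or why an ‘L’ looks the way it does; it only counts matching pixels.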
Arguably, OCR is about taking a small patch of an image and matching it to a finite set of candidate ground truths. OCR programs can sometimes do this better than most humans, if the only thing you look at is one distorted character.
OCR has traditionally been a difficult problem and there are some novel applications of statistics and heuristics used to solve it. But OCR is not what we actually care about: the problem is recognizing a document, or symbolically representing a sentence, and OCR is just one small problem we’ve carved out to help us deal with the larger problem.
Characters are important when they are part of words, and the structure of a document. They are important when they contribute to what the document means, beyond just the raw data of the image scan. Situating a character in the context of the word it’s in, the sentence that word is in, and the context of the document (English novel, handwritten letter from the 18th century, hastily scribbled medical report from a German hospital in the 1970s) is what allows a human to extrapolate what the character must be, even if the image of the original character is distorted beyond any algorithm’s ability to recognize, or even obliterated entirely.
It’s this effect of context which is hard to capture and encode into an OCR algorithm. This broader sense of being able to recognize a character anywhere a human would, which is the end goal of the problem, is what my friends refer to as an AI-complete problem. (Apologies if this community also uses that phrase; I haven’t yet seen it here on LW.)
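As a toy illustration of how much even word-level context buys you: if the pixel-level recognizer can only narrow a damaged character down to a few candidates, checking which candidate completes a known word often settles it. The vocabulary and candidate set below are invented for the example; real systems use far richer language models than a word list.

```python
# Toy word-level context: pick the candidate character that yields a known word.
VOCABULARY = {"clear", "dear", "clean", "ocean"}

def resolve(prefix, candidates, suffix):
    """The scanner read `prefix` + <damaged char> + `suffix`, and the low-level
    recognizer narrowed the damaged character to `candidates`.
    Return the candidates that produce a word in the vocabulary."""
    return [c for c in candidates if prefix + c + suffix in VOCABULARY]

# The recognizer can't tell whether the smudged glyph is 'a' or 'o',
# but only one choice makes a word given the surrounding letters.
print(resolve("cle", ["a", "o"], "r"))  # -> ['a']  ("clear")
```

Document-level context (is this an English novel or a German medical report?) plays the same role one level up, but is far harder to encode.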
To give a specific example, many doctors use the symbol “circle above a cross” to indicate female, which most people reading would understand. Why? We’ve seen that symbol before, perhaps many times, and understand what it means. If you’ve trained your OCR algorithm on the standard set of English alphanumeric characters, then it will attempt to match that symbol and come up with the wrong answer. If you’ve done unsupervised training of an OCR algorithm on a typical novel, magazine, and newspaper corpus, there is a good chance that the symbol for female does not appear as a cluster in its vector space.
In order to recognize that symbol as a distinct symbol that needs to be somehow represented in the output, an OCR algorithm would have to do unsupervised online learning as it’s scanning documents in a new domain. Even then, I’m not sure how useful it would be, since the problem is not recognizing a given character. The problem is recognizing what that character should be given the context of the document you’re scanning. The problem of OCR explodes into specializations of “OCR for novels, OCR for 18th century English letters, OCR for American hospitals”, and even more.
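One crude stand-in for that kind of learning, just to make the problem concrete: a system could at least flag a glyph as “not anything I was trained on” by thresholding its distance to the nearest known prototype. This is a sketch only, with made-up feature vectors and an arbitrary threshold; it is simpler than the unsupervised clustering described above, and it only detects novelty, without saying what the new symbol means.

```python
import numpy as np

# Hypothetical feature vectors for known character classes (e.g. averaged
# pixel features from training data). Values are made up for illustration.
PROTOTYPES = {
    'O': np.array([0.9, 0.1, 0.8, 0.1]),
    'Q': np.array([0.9, 0.2, 0.8, 0.4]),
    '+': np.array([0.1, 0.9, 0.1, 0.9]),
}
NOVELTY_THRESHOLD = 0.5  # arbitrary cutoff for the example

def recognize_or_flag(features):
    """Return the nearest known label, or flag the glyph as an unknown symbol
    if it is too far from every prototype."""
    label, dist = min(
        ((name, np.linalg.norm(features - proto)) for name, proto in PROTOTYPES.items()),
        key=lambda pair: pair[1],
    )
    return label if dist < NOVELTY_THRESHOLD else "[unknown symbol]"

# A glyph whose features mix the circle and the cross (like the female symbol)
# is far from every prototype, so it gets flagged rather than mislabeled.
print(recognize_or_flag(np.array([0.5, 0.5, 0.5, 0.6])))  # -> '[unknown symbol]'
```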
If we want an OCR algorithm to output something more useful than [funky new character I found], and instead insert “female” into the text database, at some point we have to tell the algorithm about the character. There is no OCR system that I know of that avoids this hard truth.
I like “AI-complete”, though it wouldn’t surprise me if general symbol recognition and interpretation turned out to be easier than natural language, whereas all NP-complete problems are equivalent to one another.
I kept my initial comment technical, without delving into the philosophical aspects of it, but now I can ramble a bit.
I suspect that general symbol recognition and interpretation is AI-complete, because of these issues of context, world knowledge, and quasi-unsupervised online learning.
I believe there is a generalized learning algorithm (or set of algorithms), using (at minimum) frequencies and in-built biological heuristics, that we use to approach the world. In this view, natural language generation and understanding is one manifestation of this more general learning system (or constantly updating pattern recognition, if you like, though I think there may be more to it than simple recognition). Symbol recognition and interpretation is another.
“Recognition” and “interpretation” are themselves slippery words that hide the how and the what of what it is we do when we see a symbol. Computational linguists and psycholinguists have done a good job of demonstrating that we know very little about what we’re actually doing when we process visual and auditory input.
You are right that AI-complete probably hides finer levels of equivalence classes, wrapped up in the messy issue of what we mean by intelligence. Still, it’s a handy shorthand for problems that may require this more general learning facility, about which we understand very little.