It can’t literally be impossible, because humans do it, but artificial natural language understanding seemed to me like the kind of thing that couldn’t happen without either a major conceptual breakthrough or a ridiculous amount of grunt work done by humans, like what the CYC project is attempting: entering into a database, by hand, everything a typical 4-year-old might learn by experiencing the world. On the other hand, if by “natural language” you mean something like “a really good Zork-style interactive fiction parser”, that might be a bit less difficult than making a computer that can pass a high school English course. And I’m really boggled that a computer can play Jeopardy! successfully. Although, for it to really be a fair competition, the computer shouldn’t be given any direct electronic inputs; if the humans have to use their eyes and ears to learn what the categories and “answers” are, then the computer should have to use a video camera and microphone, too.
The first time I used Google Translate, a couple of years ago, I was astonished at how good it was. Ten years earlier, I would have thought it nearly impossible to do something like that within the next half century.
Yeah, the trick they used is interesting: they basically used translated books, rather than dictionaries, as their reference… that, and a whole lot of computing power.
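To make the “translated books, not dictionaries” trick concrete, here’s a toy sketch (not Google’s actual pipeline, and the tiny English–German corpus is invented for illustration): given enough aligned sentence pairs, plausible word correspondences fall out of co-occurrence statistics alone, with no dictionary involved.

```python
from collections import defaultdict

# Invented "parallel corpus" of aligned English-German sentence pairs,
# standing in for the huge collections of translated text a real
# statistical MT system would mine.
parallel = [
    ("the house is small", "das haus ist klein"),
    ("the house is green", "das haus ist gruen"),
    ("the book is small", "das buch ist klein"),
]

pair_counts = defaultdict(int)  # (english word, german word) -> co-occurrence count
en_counts = defaultdict(int)    # english word -> sentence count
de_counts = defaultdict(int)    # german word -> sentence count
for en, de in parallel:
    for e in en.split():
        en_counts[e] += 1
        for d in de.split():
            pair_counts[(e, d)] += 1
    for d in de.split():
        de_counts[d] += 1

def best_guess(word):
    """Score candidate translations with a Dice-like association measure,
    so words that merely appear everywhere (like 'das') don't win on raw
    co-occurrence counts alone."""
    candidates = {d for (e, d) in pair_counts if e == word}
    return max(candidates,
               key=lambda d: pair_counts[(word, d)] ** 2
                             / (en_counts[word] * de_counts[d]))

print(best_guess("house"))  # -> haus
print(best_guess("small"))  # -> klein
```

Real systems of that era (IBM-style statistical MT) did something far more sophisticated, estimating translation probabilities with EM over phrases rather than counting word pairs, but the core idea is the same: the “reference” is parallel text, and the rest is statistics plus a whole lot of computing power.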
If you have an algorithm that works poorly but gets better if you throw more computing power at it, then you can expect progress. If you don’t have any algorithm at all that you think will give you a good answer, then what you have is a math problem, not an engineering problem, and progress in math is not something I know how to predict. Some unsolved problems stay unsolved, and some don’t.
You know, I actually don’t know!
Is Google Translate a somewhat imperfect Chinese Room?
Also, is Google Translate getting better?