I’m 59. It didn’t seem to me as though things changed very much until the ’90s. Microwaves and transistor radios are very nice, but not the same sort of qualitative jump as getting online.
And now we’re in an era where it’s routine to learn about extrasolar planets—admittedly not as practical as access to the web, but still amazing.
I’m not sure whether we’re careening towards a singularity, though I admit that self-driving cars are showing up much earlier than I expected.
Did anyone else expect that self-driving cars would be so much easier than natural language?
I was very surprised. I had been using Google Translate, and before that Babel Fish, for years, and expected them to keep improving slowly and incrementally, as they had been doing; self-driving cars, on the other hand, showed essentially no visible improvement to me through the 1990s and 2000s, right up to the second DARPA Grand Challenge, where (to me) they did the proverbial ‘0 to 60’.
Not I—they seem like different kinds of messy. Self-driving cars have to deal with the messy, unpredictable natural world, but within a fairly narrow set of constraints. Many very simple organisms can find their way along a course while avoiding obstacles and harm; driving obviously isn’t trivial to automate, but it just seems orders of magnitude easier than automating a system that can effectively interface with the behavior-and-communication protocols of eusocial apes, as it were.
I have always expected computers that were as able to navigate a car to a typical real-world destination as an average human driver to be easier to build than computers that were as able to manage a typical real-world conversation as an average human native speaker.
That said, there’s a huge range of goalposts in the realm of “natural language”, some of which I expected to be a lot easier than they seem to be.
I had access to a BASIC-programmable time-shared teletype in ’73 & ’74, dial-up and a local IBM (we loaded cards, got printouts of results) from ’74-’78 at Swarthmore College, programmed in Fortran for radio astronomers from ’78-’80, and so on… I always took computers for granted, and assumed through that entire time period that it was “too late” to get in on the ground floor because everybody already knew.
I never realized before now how lucky I was, how little excuse I have for not being rich.
If by “expect” you mean BEFORE I knew the result? :) It is very hard to make predictions, ESPECIALLY about the future. Now I didn’t anticipate this would happen, but as it happens it seems very sensible.
Stuff we were particularly evolved to do is more complex than stuff we use our neocortex for, i.e., stuff we were not particularly evolved to do. I think we systematically underestimate how hard language is because we have all sorts of evolutionarily provided “black boxes” to help us along, which we seem blind to until we try to duplicate their function outside our heads. Driving, on the other hand, we are not particularly well evolved to do, so we have had to make it so simple that even a neocortex can do it. Probably the hardest part of automated driving is bringing situational awareness into the machine driving the car: interpreting camera images to tell what a stoplight is doing, where the other cars are and how they are moving, and so on, all of which recapitulate things we are well evolved to do.
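To make the situational-awareness point concrete, here is a deliberately crude sketch of one tiny piece of it: deciding what a stoplight is doing from a cropped camera image. Real systems use trained detectors; everything here (the thresholds, the color-voting scheme) is an invented toy illustration, not how any actual car does it.

```python
# Crude color-voting classifier for a cropped stoplight image.
# Thresholds are arbitrary toy values, not tuned for real imagery.

def classify_light(pixels):
    """pixels: iterable of (r, g, b) tuples, each channel 0-255.
    Returns 'red', 'yellow', or 'green' by counting strongly colored pixels."""
    votes = {"red": 0, "yellow": 0, "green": 0}
    for r, g, b in pixels:
        if r > 150 and g > 150 and b < 100:   # bright red+green reads as yellow
            votes["yellow"] += 1
        elif r > 150 and g < 100:             # red dominates
            votes["red"] += 1
        elif g > 150 and r < 100:             # green dominates
            votes["green"] += 1
    return max(votes, key=votes.get)

# A patch that is mostly bright red pixels classifies as 'red'.
state = classify_light([(200, 30, 30)] * 5 + [(10, 200, 10)] * 2)
```

Even this toy hints at why the step is hard: the interesting work is everything the sketch skips, such as finding the light in the frame at all.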
But no, automated driving before relatively natural language interfaces was a shocking result to me as well.
And I can’t WAIT to get one of those cars. Although my daughter getting her learner’s permit in half a year is almost as good (what do I care whether Google drives me around or Julia does?).
In an amazing coincidence, soon after seeing your comment I came across a Hacker News link that included this quote:
I’m not surprised that it’s easier, but I also didn’t expect to see self-driving cars that worked.
Does this imply that you expected natural language to be impossible?
You know, I actually don’t know!
It can’t literally be impossible, because humans do it, but artificial natural language understanding seemed to me like the kind of thing that couldn’t happen without either a major conceptual breakthrough or a ridiculous amount of grunt work done by humans, like the CYC project is seeking to do—input, by hand, into a database everything a typical 4-year-old might learn by experiencing the world. On the other hand, if by “natural language” you mean something like “really good Zork-style interactive fiction parser”, that might be a bit less difficult than making a computer that can pass a high school English course. And I’m really boggled that a computer can play Jeopardy! successfully. Although, to really be a fair competition, the computer shouldn’t be given any direct electronic inputs; if the humans have to use their eyes and ears to know what the categories and “answers” are, then the computer should have to use a video camera and microphone, too.
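A toy sketch of the hand-entered knowledge-base approach mentioned above. The facts, relation names, and inheritance rule here are invented for illustration; they are not CYC’s actual representation, just the general shape of the idea:

```python
# Hand-entered "common sense" facts, keyed by (entity, relation).
facts = {
    ("bird", "can"): "fly",
    ("penguin", "is_a"): "bird",
    ("penguin", "can"): "swim",   # specific fact overrides the bird default
    ("water", "is"): "wet",
}

def category_chain(entity):
    """Follow is_a links upward: penguin -> bird -> ..."""
    chain = [entity]
    while (chain[-1], "is_a") in facts:
        chain.append(facts[(chain[-1], "is_a")])
    return chain

def query(entity, relation):
    """Answer a query, letting more specific facts shadow inherited ones."""
    for e in category_chain(entity):
        if (e, relation) in facts:
            return facts[(e, relation)]
    return None
```

The “ridiculous amount of grunt work” is visible even in the toy: every one of those facts has to be typed in by a person, one at a time.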
The first time I used Google Translate, a couple years ago, I was astonished how good it was. Ten years earlier I thought it would be nearly impossible to do something like that within the next half century.
Yeah, the trick they used is interesting—they basically used translated books, rather than dictionaries, as their reference… that, and a whole lot of computing power.
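A minimal sketch of that idea: learning translation candidates from aligned text rather than from a dictionary, using nothing but co-occurrence statistics. The tiny “parallel corpus” below is made up, and real systems use far more sophisticated alignment models; this is only the spirit of the approach.

```python
# Learn word-to-word translation candidates from sentence-aligned text.
# Scoring uses the Dice coefficient: words that appear together often,
# and apart rarely, score highest.

from collections import Counter, defaultdict

# Made-up "parallel corpus" of English/French sentence pairs.
corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats", "le chat mange"),
]

cooc = defaultdict(Counter)   # cooc[en_word][fr_word] = co-occurrence count
src_count = Counter()
tgt_count = Counter()
for en, fr in corpus:
    en_words, fr_words = en.split(), fr.split()
    src_count.update(en_words)
    tgt_count.update(fr_words)
    for e in en_words:
        for f in fr_words:
            cooc[e][f] += 1

def translate_word(word):
    """Best-scoring target word for a source word, by Dice coefficient."""
    return max(cooc[word],
               key=lambda f: 2 * cooc[word][f] / (src_count[word] + tgt_count[f]))
```

Three sentence pairs are already enough for the counts to pull “cat” toward “chat” instead of “le”, which is the whole trick: the books do the work a dictionary would.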
If you have an algorithm that works poorly but gets better if you throw more computing power at it, then you can expect progress. If you don’t have any algorithm at all that you think will give you a good answer, then what you have is a math problem, not an engineering problem, and progress in math is not something I know how to predict. Some unsolved problems stay unsolved, and some don’t.
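A concrete instance of an algorithm that “works poorly but gets better if you throw more computing power at it”: Monte Carlo estimation of π. The algorithm is fixed; accuracy is bought purely with more samples.

```python
# Monte Carlo estimate of pi: sample points in the unit square and count
# the fraction landing inside the quarter circle of radius 1.
# More samples (more compute) means a better answer, with no new ideas.

import random

def estimate_pi(n_samples, seed=0):
    rng = random.Random(seed)   # seeded for reproducibility
    inside = sum(
        rng.random() ** 2 + rng.random() ** 2 <= 1.0
        for _ in range(n_samples)
    )
    return 4 * inside / n_samples
```

With a few hundred samples the estimate is rough; with a few hundred thousand it is good to a couple of decimal places. That is the engineering regime described above. When no workable algorithm exists at all, there is no such knob to turn.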
Is Google Translate a somewhat imperfect Chinese Room?
Also, is Google Translate getting better?