I think the real difference between people like Taleb and the techno-optimists is that we think the present is cool. He brags about going to dinner in minimalist shoes, and eating food cooked over a fire, whereas I think it’s awesome that I can heat things up instantly in a microwave oven, and do just about anything in meticulously engineered and perfectly fitted, yet cheaply mass-produced, running shoes without worrying about damaging my feet. I also like keyboards, and access to the accumulated knowledge of humanity from anywhere, and contact lenses. And I thought it was funny when he said that condoms were one of the most important new technologies but aren’t talked about much, as if to imply that condoms aren’t cool. I think condoms are cool! I remember when I first got condoms and took one out to play with. After testing it a couple of different ways, I thought: *how does anyone manage to break one of these!?* It’s easy to extrapolate that no “cool” technology will exist in the future if you don’t acknowledge that any cool technology currently exists.
But I think Taleb’s piece is valuable, because it illustrates what we are up against as people trying to get others to take seriously the risks and opportunities presented by future technologies. Taleb seems very serious and respectable precisely because he is so curmudgeonly and conservative, whereas we seem excitable and silly. And he’s right that singularitarian types tend to overemphasize changes relative to everything that remains the same, and often conflate their predictions of the future with their desires for the future. I think that Less Wrong is better than most in this regard, with spokespeople for SI taking care to point out that their singularity hypothesis does not predict accelerating change, and that the consequences of disruptive technology need not be good. Still, I wonder if there’s any way to present a more respectable face to the public without pretending that we don’t believe what we do.
You might get a different perspective on the present when you reach your 50s, as I have. I used Amazon’s book-previewing service to read parts of W. Patrick McCray’s book, *The Visioneers*, and I realized that I could nearly have written that book myself because my life has intersected with the story he tells at several points. McCray focuses on Gerard K. O’Neill and Eric Drexler, and in my Amazon review I pointed out that after a generation, or nearly two in O’Neill’s case, we can get the impression that their respective ideas don’t work. No one has gotten any closer to becoming a space colonist since the 1970s, and we haven’t seen the nanomachines Drexler promised us in the 1980s which can produce abundance and make us “immortal.”
So I suspect you youngsters will probably have a similar letdown waiting for you when you reach your 40s and 50s, and realize that you’ll wind up aging and dying like everyone else without having any technological miracles to rescue you.
http://www.amazon.com/The-Visioneers-Scientists-Nanotechnologies-Limitless/dp/0691139830/
A lot of young people, including me, seem to be getting a lot of “man, we’re really living in the future” kind of emotional reactions relatively frequently. E.g. I remember that as a kid, I imagined having a Star Trek-style combined communicator and tricorder so that if someone wanted to know where I was, I could snap them a picture of my location and send it to them instantly. To me, that felt cool and science fictiony. Today, not only can even the cheapest cell phone do that, but many phones can be set up to constantly share their location to all of one’s friends.
Or back in the era of modems and dial-up Internet, the notion of having several gigabytes of e-mail storage, wireless broadband Internet, or a website hosting and streaming the videos of anyone who wanted to upload them, all felt obviously unrealistic and impossible. Today everyone takes the existence of those for granted. And with Google Glass, I expect augmented reality to finally become commonplace and insert itself into our daily lives just as quickly as smartphones and YouTube did.
And since we’re talking about Google, self-driving cars!
Or Planetary Resources. Or working brain implants. Or computers beating humans at Jeopardy. Or… I could go on and on.
So the point of this comment is that I’m having a hard time imagining my 40s and 50s being a letdown in terms of technological change, given that by my mid-20s I’ve already experienced more future shocks than I would ever have expected to. And that makes me curious: did you experience a similar number of future shocks when you were my age?
I’m 55 and I think the present is more shocking now than it was in the 1970s and 1980s. For me, the 70s and 80s were about presaging modern times. The first time I could look up the card catalog at my local library, ~1986 on Gopher, I began to believe viscerally that all this internet-and-computers stuff was going to seriously matter SOON. Within a few months of that I saw my first webpage, and that literally (by which of course I mean figuratively) knocked me into the next century. I was flabbergasted.
Part of what was so shocking about being shocked was that it was, in some weird sense, exactly what I expected. I had played with HyperCard on Macs years earlier, and the early web was essentially just a networked extension of that. In my science-fiction youth, I had always known, or believed, that knowledge would be ubiquitously available. I could summarize by saying that none of the electronics in Star Trek (the original) seemed unreasonable: talking computers, big displays, tricorders, and communicators. To me, faster-than-light travel, intelligent species all over the universe that looked and acted like made-up humans, and the transporter all seemed unreasonable.
Maybe what was shocking about the webpage was that it was so GORGEOUS. I saw it on a biggish Sun workstation screen. Text was crisp, proportionally spaced black type on a white background. Pictures were colorful and vibrant. Hypertext links worked really fast. The impact of actually seeing it was overwhelming compared to just believing that someday it would be there.
As a 55-year-old it feels to me like we are careening towards a singularity. The depth of processing power and the variety of sensors that can be used in smartphones have barely begun to be explored. Meanwhile, these continue to get more powerful, more beautiful, and with more sensors available. Google’s self-driving cars: of course all those ideas about building guide wires and special things into the roads were dopey. At least they’re dopey if you don’t have to do it, and Google shows you don’t.
Years ago, when looking at biotech, I commented wonderingly to a friend my own age: isn’t it amazing to think that we could be among the last generation to die? My only consolation in knowing I will probably not make it to the singularity is that, the way these things go, it will probably be delayed until 2091 anyway, so I won’t just miss it by a little. And meanwhile, it doesn’t take much to enjoy the sled ride down the event horizon as the proto-singularity continues to wind itself up.
Live long and prosper, my friends.
I’m 59. It didn’t seem to me as though things changed very much until the 90s. Microwaves and transistor radios are very nice, but not the same sort of qualitative jump as getting online.
And now we’re in an era where it’s routine to learn about extrasolar planets—admittedly not as practical as access to the web, but still amazing.
I’m not sure whether we’re careening towards a singularity, though I admit that self-driving cars are showing up much earlier than I expected.
Did anyone else expect that self-driving cars would be so much easier than natural language?
I was very surprised. I had been using Google Translate and before that Babel Fish for years, and expected them to slowly incrementally improve as they kept on doing; self-driving cars, on the other hand, had essentially no visible improvement to me in the 1990s and the 2000s essentially up to the second DARPA challenge where (to me) they did the proverbial ‘0 to 60’.
Not I—they seem like different kinds of messy. Self-driving cars have to deal with the messy, unpredictable natural world, but within a fairly narrow set of constraints. Many very simple organisms can find their way along a course while avoiding obstacles and harm; driving obviously isn’t trivial to automate, but it just seems orders of magnitude easier than automating a system that can effectively interface with the behavior-and-communication protocols of eusocial apes, as it were.
I have always expected computers that were as able to navigate a car to a typical real-world destination as an average human driver to be easier to build than computers that were as able to manage a typical real-world conversation as an average human native speaker.
That said, there’s a huge range of goalposts in the realm of “natural language”, some of which I expected to be a lot easier than they seem to be.
I had access to a BASIC-programmable time-shared teletype in ’73 & ’74, dial-up and a local IBM (we loaded cards, got printouts of results) ’74-’78 @ Swarthmore College, programmed in Fortran for radio astronomers ’78-’80, and so on… I always took computers for granted, and assumed through that entire time period that it was “too late” to get in on the ground floor because everybody already knew.
I never realized before now how lucky I was, how little excuse I have for not being rich.
If by “expect” you mean BEFORE I knew the result? :) It is very hard to make predictions, ESPECIALLY about the future. Now I didn’t anticipate this would happen, but as it happens it seems very sensible.
Stuff we were particularly evolved to do is more complex than stuff we use our neocortex for, stuff we were not particularly evolved to do. I think we systematically underestimate how hard language is because we have all sorts of evolutionarily provided “black boxes” to help us along, which we seem blind to until we try to duplicate their function outside our heads. Driving, on the other hand, we are not particularly well evolved to do, so we have had to make it so simple that even a neocortex can do it. Probably the hardest part of automated driving is bringing situational awareness into the machine driving the car: interpreting camera images to tell what a stoplight is doing, where the other cars are and how they are moving, and so on, all of which recapitulates things we are well evolved to do. (A toy sketch of just the stoplight piece follows below.)
But no, automated driving before relatively natural language interfaces was a shocking result to me as well.
And I can’t WAIT to get one of those cars. Although my daughter getting her learner’s permit in half a year is almost as good (what do I care whether Google drives me around or Julia does?).
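To make the “interpreting camera images to tell what a stoplight is doing” step concrete, here’s a deliberately naive sketch of just that sub-task: classifying a cropped traffic-light image by thresholding colors in HSV space. The OpenCV approach and the hue ranges are my own illustrative assumptions; a real driving stack is vastly more elaborate.

```python
# Toy traffic-light classifier: threshold a cropped camera image in HSV
# space and see which color has the most bright pixels. Hue ranges are
# rough, uncalibrated assumptions; the region of interest (the light
# itself) is assumed to have been found by some other component.
import cv2
import numpy as np

def classify_light(bgr_roi):
    """Guess whether a cropped traffic-light image shows red, yellow, or green."""
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    masks = {
        # Red wraps around the hue axis in OpenCV (0-180), so two ranges.
        "red": cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
               | cv2.inRange(hsv, (170, 120, 120), (180, 255, 255)),
        "yellow": cv2.inRange(hsv, (20, 120, 120), (35, 255, 255)),
        "green": cv2.inRange(hsv, (45, 120, 120), (75, 255, 255)),
    }
    # Whichever color has the most bright, saturated pixels wins.
    return max(masks, key=lambda c: int(np.count_nonzero(masks[c])))
```

Even this toy shows the point above: the machine has to rebuild, piece by piece, situational awareness that our evolved visual system hands us for free.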
In an amazing coincidence, soon after seeing your comment I came across a Hacker News link that included this quote:
I’m not surprised that it’s easier, but I also didn’t expect to see self-driving cars that worked.
Does this imply that you expected natural language to be impossible?
You know, I actually don’t know!
It can’t literally be impossible, because humans do it, but artificial natural language understanding seemed to me like the kind of thing that couldn’t happen without either a major conceptual breakthrough or a ridiculous amount of grunt work done by humans, like the CYC project is seeking to do—input, by hand, into a database everything a typical 4-year-old might learn by experiencing the world. On the other hand, if by “natural language” you mean something like “really good Zork-style interactive fiction parser”, that might be a bit less difficult than making a computer that can pass a high school English course. And I’m really boggled that a computer can play Jeopardy! successfully. Although, to really be a fair competition, the computer shouldn’t be given any direct electronic inputs; if the humans have to use their eyes and ears to know what the categories and “answers” are, then the computer should have to use a video camera and microphone, too.
The first time I used Google Translate, a couple years ago, I was astonished how good it was. Ten years earlier I thought it would be nearly impossible to do something like that within the next half century.
Yeah, it’s interesting, the trick they used—they basically used translated books, rather than dictionaries, as their reference… that, and a whole lot of computing power.
If you have an algorithm that works poorly but gets better if you throw more computing power at it, then you can expect progress. If you don’t have any algorithm at all that you think will give you a good answer, then what you have is a math problem, not an engineering problem, and progress in math is not something I know how to predict. Some unsolved problems stay unsolved, and some don’t.
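The “translated books, not dictionaries” trick can be made concrete with a toy. The sketch below learns word translations purely from sentence-aligned text by counting co-occurrences, with a crude frequency discount so common words don’t win everything; it’s an illustrative stand-in for real statistical machine translation (word-alignment models, phrase tables, and enormous corpora), not Google’s actual pipeline.

```python
# Learn word translations from sentence-aligned text by co-occurrence
# counting. The tiny corpus and the scoring rule are illustrative only.
from collections import Counter, defaultdict

parallel = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
    ("the dog eats",   "le chien mange"),
]

cooc = defaultdict(Counter)   # source word -> target words seen in paired sentences
tgt_freq = Counter()          # how often each target word appears overall
for src, tgt in parallel:
    tgt_words = tgt.split()
    tgt_freq.update(tgt_words)
    for s in src.split():
        cooc[s].update(tgt_words)

def translate_word(s):
    cands = cooc.get(s)
    if not cands:
        return s  # unknown word passes through untranslated
    # Score = co-occurrence count, discounted by overall target frequency,
    # so "le" doesn't win just by appearing everywhere.
    return max(cands, key=lambda t: cands[t] / (tgt_freq[t] + 1))

print(translate_word("cat"), translate_word("sleeps"))  # chat dort
```

This also illustrates the “engineering problem” category above: the same counting scheme simply gets better as you feed it more parallel text and more compute.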
Is Google Translate a somewhat imperfect Chinese Room?
Also, is Google Translate getting better?
/me points at cryonics.
OK, I’m a bit younger than you, though I still remember having to use a slide rule in school. And I agree, it’s been an exciting ride.
Not my impression at all. To me the ride appears full of wild surprises around every turn. In retrospect, while I did foresee one or two things that came to pass, others were totally unexpected. That’s one reason I keep pointing out on this forum that failure of imagination is one of the most pervasive and least acknowledged cognitive fallacies. There are many more black swans than we expect.
In that sense, we are living through the event horizon already. As a person trained in General Relativity, I dislike misusing this term, but there is a decent comparison here: when free-falling across the event horizon of a black hole, one does not notice anything special at all; it’s business as usual. There is no visible “no going back” moment at all.
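To put a rough number on “business as usual” (a back-of-the-envelope sketch using the standard tidal estimate, not a full geodesic-deviation calculation): the stretching a free-faller of size L feels near radius r, and its value at the horizon, are

```latex
% Tidal acceleration across a body of size L near a mass M,
% and its value at the Schwarzschild horizon r_s = 2GM/c^2:
\[
\Delta a \;\approx\; \frac{2GM}{r^{3}}\,L ,
\qquad
\left.\Delta a\right|_{r_s = 2GM/c^{2}} \;=\; \frac{c^{6}\,L}{4\,G^{2}M^{2}} \;\propto\; \frac{1}{M^{2}} .
\]
```

Because this scales as 1/M², for a supermassive hole like Sgr A* (roughly 4×10⁶ solar masses) the tidal acceleration across a human body at the horizon comes out around 10⁻³ m/s²: far too small to feel, which is exactly the “no visible moment of no going back” point.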
In that vein, I expect the surprises, both good and bad, to continue at about the same pace for some time. I am guessing that the worst problems will be those no one thinks about now, except maybe in a sci-fi story or two, or on some obscure blog. Same with x-risk: it will not be Skynet, nanobots, bioweapons, or asteroids, but something totally out of left field. Similarly, the biggest progress in life extension will not be due to cryo or WBE, but to some other tech. Or maybe there won’t be any at all for another century.
I get the same feeling. It seems unusually hard to come up with an idea of what things will be like after ten or so years that doesn’t sound like either head-in-the-sand denial of technological change or crazy speculation.
I wonder how you could figure out just how atypical things are now. Different from most of history, sure: most people lived in a world where you expected life’s parameters to be the same for your grandparents’ and grandchildren’s generations, and we definitely don’t have that now. But we haven’t had that in the first world for the last 150 years. Telegraphs, steam engines, and mass manufacture were new things that caused massive societal change. Computers, nuclear power, space rockets, and figuring out that space and time are stretchy and that living cells are just chemical machines were the kind of things more likely to make onlookers go “wait, that’s not supposed to happen!” than “oh, clever.”
People during the space age definitely thought they were living in the future, and contemporary stuff is still a bit tinged by how their vast projections failed to materialize on schedule. Did more people in 1965 imagine they were living in the future than people in 1975? What about people doing computer science in 1985, compared to 2005?
The space program enthusiasts mostly did end up very disappointed in their 50s, as did the people who were trying to get personal computing going using unified Lisp or Smalltalk environments that were supposed to empower users with the ability to actually program the system as a routine matter.
Following the pattern, you’d expect to get a bunch of let-down aging singularitarians in the 2030s, when proper machine intelligence is still tangled up in various implementation dead ends and can’t get funding, while young people are convinced that spime-interfaced DNA-resequencing implants are going to be the future thing that will change absolutely everything, you just wait, and the furry subculture is a lot more disturbing than it used to be.
So I don’t know which it is. There seems to be more stuff from the future in people’s everyday lives now, but stuff from the future has been around for over a century, so it’s not instantly obvious that things should be particularly different right now.
It may seem to have been a golden age of promise now lost, but I was there, and that isn’t how it seems to me.
As examples of computer science in 1985, the linked blog post cites the Lisp machine and ALICE. The Lisp machine was built. It was sold. There are no Lisp machines now, except maybe in museums or languishing as mementos. ALICE (not notable enough to get a Wikipedia article) never went beyond a hardware demo. (I knew Mike Reeve and John Darlington back then, and knew about ALICE, although I wasn’t involved with it. One of my current colleagues was, and still has an old ALICE circuit board in his office. I was involved with another alternative architecture, of which, at this remove, the less said the better.)
What killed them? Moore’s Law, an observation that was made even back then. There was no point in designing special-purpose hardware for better performance, because general-purpose hardware would double in speed before long and outperform you before you could ever get into production. Turning up the clock made everything faster, while specialised hardware only made a few things faster.
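The “outperformed before you could get into production” point is easy to make quantitative. A toy calculation, with illustrative numbers: a one-off special-purpose speedup racing general-purpose hardware that doubles in speed every 18 months.

```python
import math

def breakeven_years(speedup, doubling_months=18.0):
    """Years until commodity hardware, doubling every `doubling_months`,
    catches up with a fixed one-time speedup factor."""
    return doubling_months / 12.0 * math.log2(speedup)

for speedup in (4, 10, 50):
    print(f"{speedup}x edge erased in {breakeven_years(speedup):.1f} years")
# 4x -> 3.0 years, 10x -> 5.0 years, 50x -> 8.5 years: comparable to a
# hardware design-and-production cycle, which was exactly the problem.
```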
Processors stopped getting faster in 2004 (when Intel bottled out of making 4GHz CPUs). The result? Special-purpose hardware, driven primarily not by academic research but by engineers trying to make stuff that did more within that limit: GPUs for games and server farms for the web. Another damp squib of the 1980s, the Transputer, can be seen as ancestral to those developments, but I suspect that if the Transputer had never been invented, the development of GPUs would be unaffected.
When it appears, as the blog post says, “that all you must do to turn a field upside-down is to dig out a few decades-old papers and implement the contents”, well, maybe a geek encountering the past is like a physicist encountering a new subject. OTOH, he is actually trying to do something, so props to him, and I hope he succeeds at what could not be done back then.
Thinking a bit more about this, I think the basic pattern I’m matching here is that in each era there’s some grand technocratic narrative in which an overarching first-principles design built from the current Impressive Technology (industrial production, rocket engines, internetworked computers, or artificial intelligence) will produce a clean and ordered new world order. This won’t happen, and instead something a lot more organic, diffuse, confusing, low-key and wildly unexpected will show up.
On the other hand, we don’t currently seem to have the sort of unified tech paradigm there was during the space age. My guess for the next big tech-paradigm thing would be radical biotechnology and biotech-based cognitive engineering, but we don’t really have either of those yet. Instead, we’ve got Planetary Resources and Elon Musk doing the stuff of the space-age folk, Bitcoin and whatnot that’s something like what the 90s cypherpunks thought up, IBM’s Watson and Google’s cars as the sort of thing AI was supposed to deliver in the 80s before the AI Winter set in, and we might be seeing a bit of a return to the 80s-style diverse playing field in computing, with stuff like the Raspberry Pi, 3D printing, and everybody being able to put their apps online and for sale without paying for brick & mortar shelf space.
So it’s kinda like all the stuff that was supposed to happen any time now at various points of the late 20th century was starting to happen at once. But that could be just the present looking like it has a lot more stuff than the past, since I’m seeing a lot less of the past than the present.
You know, that’s a good description of my reaction to reading Brin’s *Existence* the other day. I think 10 years is not that revolutionary, but at 50+ years, the dichotomy is getting pretty bad.
I’m 33, and same here.
I like to point out the gap between the time I think of something cool and the time it is invented. For a number of years now, that gap has usually been negative. As a trivial, silly example, after hearing the Gangnam Style song, I said “I want to see the parody video called ‘Gungan Style’, about Star Wars” (I just assumed it already existed). While there were indeed several such videos, the top result was instead a funnier video making fun of the concept of making such a parody video.
If we’re living in the future, when is the present?
We just missed it.
The time at which classical images of “the future” were generated and popularized.
On the other hand, we do have nanomachines, which can do a number of interesting things, and we didn’t have them a couple decades ago. We’re making much more tangible progress towards versatile nanotechnology than we are towards space colonization.
It seems that both Taleb and Aaronde are talking about a much smaller scale change than things like space colonization and general nanotech.
Yeah, that was my impression. One of the things that’s interesting about the article is that many of the technologies Taleb disparages already exist. He lists space colonies and flying motorcycles right alongside mundane tennis shoes and video chat. So it’s hard to tell when he’s criticizing futurists for expecting certain new technologies, and when he’s criticizing them for wanting those new technologies. When he says that he’s going to take a cab driven by an immigrant, is he saying that robot cars won’t arrive any time soon? Or that it wouldn’t make a difference if they did? Or that it would be bad if they did? I think his point is a bit muddled.
One thing he gets right is that cool new technologies need not be revolutionary. Don’t get me wrong; I take the possibility of truly transformative tech seriously, but futurists do overestimate technology for a simple reason: when imagining what life will be like with a given gadget, you focus on those parts of your life where you could use the gadget, and thus overestimate its positive effect. (This is also why people’s kitchens get cluttered over time.) For myself, I think that robot cars will be commonplace in ten years, and that will be friggin’ awesome. But it won’t transform our lives—it will be an incremental change. The flip side is that Taleb may underestimate the cumulative effect of many incremental changes.
I’m 45 (edit: 46) and think the modern age is simply goddamn fantastic.
So why don’t we see an inverse Maes-Garreau effect, where predictors, upon hitting their 40s-50s, are suddenly let down and disenchanted and start making predictions for centuries out rather than scores of years?
And what would you predict for the LW survey results? All 3 surveys ask for the age of the respondent, so there’s plenty of data to correlate against, and we should be able to see any discouragement in the 40-50-year-old respondents.
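For what it’s worth, the proposed check is only a few lines once the survey data is in hand. A sketch, with hypothetical file and column names (“Age”, “SingularityYear”), since the real CSV schema would have to be inspected first:

```python
import pandas as pd

# Hypothetical file and columns; the actual LW survey CSVs would differ.
df = pd.read_csv("lw_survey_2012.csv")
df = df.dropna(subset=["Age", "SingularityYear"])
df["HorizonYears"] = df["SingularityYear"] - 2012  # how far out each prediction is

# An inverse Maes-Garreau effect would show up as older respondents
# predicting further-out dates: a positive age/horizon correlation.
print(df["Age"].corr(df["HorizonYears"]))
print(df.groupby(pd.cut(df["Age"], [0, 30, 40, 50, 120]))["HorizonYears"].median())
```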