I don’t have a source on this, but I remember an anecdote from Kurzweil that scientists who worked on early transistors were extremely skeptical about the future of the technology. They were so focused on solving specific technical problems that they didn’t see the big picture. Whereas an outsider could have just looked at the general trend, predicted a doubling every 18 months, and been accurate for at least 50 years.
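To make concrete what that kind of naive extrapolation implies, here is a quick back-of-the-envelope sketch (my own illustration, not something from Kurzweil): doubling every 18 months for 50 years works out to roughly a ten-billion-fold increase.

    # Back-of-the-envelope: compound growth from a fixed doubling period.
    # The 18-month period and 50-year horizon come from the anecdote above;
    # the rest is just arithmetic.
    doubling_period_years = 1.5
    horizon_years = 50

    doublings = horizon_years / doubling_period_years  # ~33.3 doublings
    growth_factor = 2 ** doublings                     # ~1e10

    print(f"{doublings:.1f} doublings -> roughly {growth_factor:.1e}x")

Numbers like that are exactly the sort of thing someone buried in the immediate engineering problems would never write down.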
So that’s why I wouldn’t trust various ML experts like Ng who have said not to worry about AGI. No, the specific algorithms they work on are nowhere near human level. But the general trend, and the proof that humans aren’t really that special, are concerning.
I’m not saying that you should just trust Yudkowsky or me instead. And expert opinion still has value. But maybe pick an expert who is more “big picture” focused? Perhaps Jürgen Schmidhuber, who has done a lot of notable work on deep learning and ML, but also has an interest in general intelligence and self-improving AIs.
And I don’t have any specific prediction from him on when we will reach AGI, but he did say last year that he believes we will reach monkey-level intelligence within 10 years, which would be quite a milestone.
Another candidate might be the group being discussed in this thread, DeepMind. They are focused on reaching general AI rather than just typical machine vision work. That’s why they have such a strong interest in game playing. I don’t have any specific predictions from them either, but I do get the impression they are very optimistic.
Whereas an outsider could have just looked at the general trend, predicted a doubling every 18 months, and been accurate for at least 50 years.
I’m not buying this.
There are tons of cases where people look at the current trend and predict it will continue unabated into the future. Occasionally they turn out to be right; mostly they turn out to be wrong. In retrospect it’s easy to pick “winners”, but do you have any reason to believe it was more than a random stab in the dark that got lucky?
If you were trying to predict the future of flight in 1900, you’d do pretty badly by surveying experts. You would do far better by taking a Kurzweil-style approach where you put combustion engine performance on a chart and compare it to estimates of the power-to-weight ratios required for flight.
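As a sketch of the mechanics (the numbers below are placeholders I made up for illustration, not historical 1900 engine data), the method just projects a trend forward until it crosses a required threshold:

    import math

    def years_until_threshold(current, required, doubling_period_years):
        # How long until a quantity with a fixed doubling period
        # reaches a required threshold.
        if current >= required:
            return 0.0
        doublings_needed = math.log2(required / current)
        return doublings_needed * doubling_period_years

    # Hypothetical placeholder values, purely illustrative.
    current_power_to_weight = 0.05   # engine hp per lb today (made up)
    required_power_to_weight = 0.10  # hp per lb needed for flight (made up)
    doubling_period = 5.0            # years per doubling (made up)

    print(years_until_threshold(current_power_to_weight,
                                required_power_to_weight,
                                doubling_period))  # -> 5.0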
The point of that comment wasn’t to praise predicting with trends. It was to show an example where experts are sometimes overly pessimistic and fail to look at the big picture.
When people say that current AI sucks, and progress is really hard, and they can’t imagine how it will scale to human-level intelligence, I think it’s a similar thing. They are overly focused on current methods and their shortcomings and difficulties. They aren’t looking at the general trend of rapid progress across the field. Who knows what could be achieved over the coming decades.
I’m not talking about specific extrapolations like Moore’s law, or even ImageNet benchmarks; just the general sense of progress every year.
This claim doesn’t make much sense from the outset. Look at your specific example of transistors. In 1965, an electronics magazine wanted to figure out what would happen over time with electronics and transistors, so they called up an expert: the director of research at Fairchild Semiconductor. That director, Gordon Moore, proceeded to coin Moore’s law and tell them the doubling would continue for at least a decade, probably more. Moore wasn’t an outsider; he was an expert. You then generalize from an incorrect anecdote.
I never said that every engineer at every point in time was pessimistic, just that many of them were at one time. And I said it was a secondhand anecdote, so take it for what it’s worth.
You have to be more specific with the timeline. The transistor was first patented in 1925, but it received little interest due to many technical problems. It took three decades of research before the first commercial silicon transistors were produced by Texas Instruments in 1954.
Gordon Moore formulated his eponymous law in 1965, while he was director of R&D at Fairchild Semiconductor, a company whose entire business consisted in the manufacture of transistors and integrated circuits. By that time, tens of thousands of transistor-based computers were in active commercial use.
It wouldn’t have made a lot of sense to predict any doublings for transistors in an integrated circuit before about 1960, because the integrated circuit itself was only invented in 1958-59.