It’s always possible that running what we think of as a human intelligence requires a lot less actual computation than we seem to generally assume. We could already have all the hardware we need and not realize it.
I remember reading somewhere that many computer applications are accelerating much faster than Moore’s law because we’re inventing better algorithms at the same time that we’re inventing faster processors. The thing about algorithms is that you don’t usually know that there’s a better one until somebody discovers it.
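To illustrate the point: a hardware upgrade gives you a fixed multiplier, but a better algorithm's advantage can grow with the size of the problem. A toy sketch (in Python; the complexity classes are just illustrative stand-ins, not from any particular application):

```python
# Toy comparison: a hardware upgrade is a fixed constant factor, but the
# advantage of an O(n log n) algorithm over an O(n^2) one grows with n.
import math

def algorithmic_speedup(n: int) -> float:
    """Ratio of step counts: naive O(n^2) vs. O(n log n) on the same input."""
    return (n * n) / (n * math.log2(n))

for n in (10**3, 10**6, 10**9):
    print(f"n = {n:>13,}: ~{algorithmic_speedup(n):,.0f}x from the better algorithm")

# At n = 10^6 the algorithmic factor is already ~50,000x, and unlike a faster
# processor it keeps growing as the problem gets bigger.
```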
Kurzweil has an example, often mentioned in these discussions and possibly what you're thinking of, of a task with a 43,000x speedup over some period, far beyond Moore's law. It was one very narrow task, cherry-picked from a paper as the one with by far the greatest improvement: an extremely unrepresentative sample, selected for rhetorical effect. Just as Kurzweil resolves ambiguity overwhelmingly in his own favor when evaluating his predictions, he selects the most extreme anecdotes he can find. On the other hand, in computer chess and Go, software progress alone seems to have contributed a speedup on the same order as Moore's law.
ETA: considering the rest of the paper, there were still improvements of many thousandfold over the period.
“Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later—in 2003—this same model could be solved in roughly one minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008. The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science.”
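The quoted factors are consistent with each other, incidentally; a quick back-of-the-envelope check (my arithmetic, not from the report):

```python
# Back-of-the-envelope check of the quoted speedup: 82 years reduced to
# 1 minute should match roughly 1,000x (hardware) * 43,000x (algorithms).
minutes_per_year = 365.25 * 24 * 60        # ~525,960 minutes
total_speedup = 82 * minutes_per_year      # 82 years -> 1 minute
print(f"82 years / 1 minute ≈ {total_speedup:,.0f}x")   # ~43,128,720x
print(f"1,000 * 43,000      = {1000 * 43000:,}x")       # 43,000,000x
```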
I agree this 43k improvement is not representative of algorithms research in general (sorting is not 43k times faster than in the 1960s, for example), but let's not call it 'very narrow': linear programming (and operations research in general) is important and used in numerous applications across every industry. We owe a good deal of our present wealth to operations research and linear programming.
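For anyone who hasn't run into linear programming: here is a minimal production-planning example in the spirit of the benchmark above, using scipy.optimize.linprog (the products, coefficients, and limits are all made up for illustration):

```python
# Toy production-planning LP (all numbers made up for illustration):
# choose quantities x1, x2 of two products to maximize profit 40*x1 + 30*x2,
# subject to limited labor and machine hours. linprog minimizes, so negate.
from scipy.optimize import linprog

c = [-40, -30]            # negated profit per unit of each product
A_ub = [[1, 1],           # labor hours consumed per unit
        [2, 1]]           # machine hours consumed per unit
b_ub = [100, 150]         # available labor hours, machine hours
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)    # optimal plan [50. 50.] with profit 3500.0
```

Real production-planning models have thousands to millions of variables and constraints, which is why algorithmic speedups on this problem matter so much in practice.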
There have been great improvements in linear programming overall, but the paper talked about applications to many areas, and Kurzweil cited the one with the greatest realized speedup, which was substantially unrepresentative.
One should also at least ponder an unexpectedly quick route to the intelligence explosion.
Maybe not a very probable outcome, but a possible one. It might seem odd that it hasn't happened already, like a dropped bomb that hasn't detonated. Yet.
I know. It's a possibility of 1 percent or thereabouts, but it should still be examined.
This has more numbers.