Kurzweil is completely inept at making predictions from his graphs. He is usually quite wrong in a very naive way. For example, one of his core predictions is when we will achieve human-level AI, based on (IIRC) nothing more than when a computer with as many transistors as the human brain has neurons could be bought off-the-shelf for $1000. As if that line in the sand had anything at all to do with making AGI.
But his exponential chart of transistors/$ is simply raw data, and the extrapolation is a straightforward prediction that has held true. He has another chart on manipulable feature sizes using various approaches, and it also shows convergence on nanometer resolution in the 2035-2045 timeframe. I trust it the same way I trust his charts of Moore's law: it's not a law of nature, but I wouldn't bet against it either.
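To make the kind of extrapolation being described concrete, here is a minimal sketch: fit a straight line to log(transistors per dollar) versus year and project it forward. The data points below are made-up placeholders, not Kurzweil's actual figures; the point is only that once you have a clean exponential trend, the extrapolation itself is trivial arithmetic.

```python
# Sketch of log-linear extrapolation. The values are illustrative placeholders
# (roughly doubling every couple of years), not real transistors/$ data.
import numpy as np

years = np.array([1990, 1995, 2000, 2005, 2010, 2015, 2020])
transistors_per_dollar = np.array([1e2, 1e3, 3e4, 1e6, 3e7, 1e9, 3e10])

# Least-squares fit in log space: log10(y) = a * year + b
a, b = np.polyfit(years, np.log10(transistors_per_dollar), 1)

# Project the fitted trend forward to future years.
for future_year in (2030, 2040):
    projected = 10 ** (a * future_year + b)
    print(f"{future_year}: ~{projected:.2e} transistors per dollar (extrapolated)")
```

Whether the trend actually continues is the real question; the arithmetic of extending the line is not.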