Long-Term Technological Forecasting
When will AGI be created? When will WBE be possible? It would be nice to have somewhat reliable methods of long-term technological forecasting. Do we? Here’s my own brief overview of the subject...
Nagy et al. (2010) is, I think, the best paper in the field. At first you might think it’s basically supporting Kurzweilian conclusions about exponential curves for all technologies, but there are serious qualifications to make. The first is that the prediction laws they tested are linear regression models, which fit the data well but are not theoretically appropriate for these data because assumptions like the independence of observations are not satisfied. A second and bigger qualification is that Nagy & company only used data from small time slices for most technologies examined in the paper. This latter problem becomes a larger source of worry when you note that we have reason to expect many technologies to follow a logistic rather than exponential growth pattern, and exponential and logistic growth patterns look the same for the first part of their curves (see Modis 2006). A third qualification is that Nagy’s performance curves database is not representative of “technology in general” or anything like that. Fourth, Nagy’s study is the first of its kind, not a summary of 20 years of careful work all leading to a shared set of conclusions we can be fairly confident about. The hedonic hotspots that fire in my brain when I engage in hyperbole want me to say that serious long-term technological forecasting is not summarized by Nagy but begins with Nagy. (But that, of course, compresses history too much.)
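To see why early exponential and logistic growth are so hard to tell apart, here’s a toy sketch; the growth rate and carrying capacity below are made up purely for illustration and have nothing to do with Nagy’s data:

```python
import numpy as np

# Made-up parameters, for illustration only (not from Nagy's database):
r, K = 0.5, 1000.0           # shared growth rate; K = logistic carrying capacity
t = np.linspace(0, 6, 13)    # a short, early time slice

exponential = np.exp(r * t)                    # P0 * e^(r*t), with P0 = 1
logistic = K / (1 + (K - 1) * np.exp(-r * t))  # logistic curve, also starting at 1

# Well before the logistic curve's inflection point (here near t = ln(K-1)/r ~ 13.8),
# the two curves are nearly indistinguishable:
print(max(abs(exponential - logistic) / exponential))  # ~0.019, i.e. under 2%
```

A short time slice from the early regime simply can’t tell you whether the curve will keep growing exponentially or level off.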
Williams (2011) shows that prediction markets simply haven’t been tested yet in the domain of long-term forecasting, and still have several incentive-structure problems to work out. I bought the book, but it’s probably not worth $125. If you go to the library and want to copy just one chapter, make it Croxson’s.
I see basically no evidence that any expert elicitation method is reliable for long-term technological forecasting (e.g. see Rowe & Wright 2001). The first study to show positive results from expert elicitation (in this case, a particular version of the Delphi method) for long-term forecasting is a single paper from last year: this one.
So if you want to predict the future of technology, it’s best not to tell detailed stories. Instead, you’ll want to focus on “disjunctive” outcomes that, like the evolution of eyes or the emergence of markets, can come about through many different paths and can gather momentum once they begin. Humans tend to intuitively underestimate the likelihood of such disjunctive outcomes (Tversky and Kahneman 1974).
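To make the disjunctive point concrete, here’s a toy calculation; the number of paths and the per-path probabilities are invented purely for illustration:

```python
# Invented numbers, for illustration: ten independent paths to the same outcome,
# each individually unlikely (a 10% chance of panning out).
p_single_path = 0.10
n_paths = 10

# Probability that at least one path succeeds = 1 - P(every path fails).
p_outcome = 1 - (1 - p_single_path) ** n_paths
print(round(p_outcome, 3))  # 0.651 -- far higher than any single path suggests
```

Judging the outcome by the plausibility of any one detailed story is exactly how you end up underestimating it.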
Rowe & Wright 2001 is from Principles of Forecasting, an ebook of which is online: http://vmg.pp.ua/books/%D0%92%D0%B5%D1%80%D0%BE%D1%8F%D1%82%D0%BD%D0%BE%D1%81%D1%82%D1%8C%20%D0%B8%20%D0%BC%D0%B0%D1%82%D1%81%D1%82%D0%B0%D1%82%D0%B8%D1%81%D1%82%D0%B8%D0%BA%D0%B0/%D0%9F%D1%80%D0%B5%D0%B4%D1%81%D0%BA%D0%B0%D0%B7%D0%B0%D0%BD%D0%B8%D0%B5/Principles%20of%20Forecasting.pdf
(Seems worth reading.)
Quick related question that you might know the answer to: are you aware of any research testing the accuracy (in predicting technological progress) of statistical prediction rules trained on the judgments of experts? As you know from Bishop & Trout, these can do better than the experts themselves at integrating information, even when the experts are very inaccurate.
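For concreteness, here is a minimal sketch of the kind of rule I mean, the “bootstrapping” models Bishop & Trout describe, where a simple regression is fit to an expert’s own past judgments and then used in place of the expert. The cues and data below are random placeholders, not real forecasting data:

```python
import numpy as np

# Hypothetical setup: each row of X holds the cues an expert saw for one past
# case (e.g. R&D spending, patent counts, recent rate of improvement), and y
# holds the expert's own judgments for those cases. Random placeholder data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                                          # 50 past cases, 3 cues each
y = X @ np.array([0.6, -0.2, 0.1]) + rng.normal(scale=0.5, size=50)   # noisy expert judgments

# "Bootstrapping the expert": fit a linear model to the expert's judgments and
# use the model's output instead of the expert. The model applies the expert's
# implicit cue weights consistently, without the case-to-case noise.
X1 = np.column_stack([X, np.ones(len(X))])       # add an intercept column
weights, *_ = np.linalg.lstsq(X1, y, rcond=None)

new_case = np.array([0.3, -1.0, 0.8, 1.0])       # cues for a new case (+ intercept)
print(new_case @ weights)                        # the model's prediction
```

Whether rules like this have ever been validated specifically on long-term technological forecasts is exactly what I’d like to know.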
“When will AGI be created?”
I’m not sure this means very much. How would we be able to tell?
Computers are already far superior to humans for many tasks. I expect more of the same in the future, with computers being delegated increasingly complex tasks. I don’t, however, see that any “singularity” is likely; rather, I expect a relatively smooth progression from what is possible today toward more difficult problems that can be solved in the future.
Even supposing computers were to advance to a state of “intelligence” where they could, say, invent interesting new mathematics, I’m not sure that this would have any profound consequences, any more than a chess-playing computer that can beat a human has any profound consequences.
It’s possible to imagine that a very powerful “intelligent” computer could somehow run amok, but we are so far from such a possibility that it hardly seems worth worrying about now. I’d worry more about human dangers (fascism, totalitarian regimes), since they seem to appear and become dangerous quite frequently. For example, should we be worried about China?