How relevant are the lessons from Megamistakes to forecasting today?
Disclaimer: This post contains unvetted off-the-cuff thoughts. I’ve included quotes from the book in a separate quote dump post to prevent this post from getting too long. Read the intro and the TL;DR if you want a quick idea of what I’m saying.
As part of a review of the track record of forecasting and the sorts of models used for it, I read the book Megamistakes: Forecasting and the Myth of Rapid Technological Change (1989) by Steven P. Schnaars (here’s a review of the book by the Los Angeles Times from back when it was published). I conducted my review in connection with contract work for the Machine Intelligence Research Institute, but the views expressed here are solely mine and have not been vetted by MIRI. Note that this post is not a full review of the book. Instead, it simply discusses some aspects of the book I found relevant.
The book is a critique of past forecasting efforts. The author identifies many problems with these forecasting efforts, and offers suggestions for improvement. But the book was written in 1989, when the Internet was just starting out and the World Wide Web didn’t exist. Thus, the book’s suggestions and criticisms may be outdated in one or more of these three ways:
Some of the suggestions in the book were mistaken, and this has become clearer based on evidence gathered since the book's publication: I don't think the book was categorically mistaken on any count. The author was careful to hedge appropriately in cases where the evidence didn't point strongly in a particular direction. But point #1 below suggests that the author did not give appropriate weight to one aspect of his analysis.
Some of the suggestions or criticisms in the book don’t apply today because the sorts of predictions being made today are of a different nature: We’ll argue this to be the case in #2 below.
Some of the suggestions in the book are already implemented routinely by forecasters today, so they don’t make sense as criticisms even though they continue to be valid guidelines. We’ll argue this to be the case in #3 below.
I haven’t been able to locate any recent work of the author where he assesses his own work in light of new evidence; if any readers can find such material, please link to it in the comments.
TL;DR
A number of the technologies that Schnaars notes were predicted to happen before 1989 and didn't, have in fact happened since then. This doesn't contradict anything Schnaars wrote; in fact, it agrees with many of his claims. But it is connotatively different from the message Schnaars seems keen on pushing in the book. The main issue with many predictions is one of timing, rather than a fundamental flaw in the vision of the future being suggested. For instance, in the realm of concerns about unfriendly AI, it may be that the danger of AGI will be imminent in 2145 AD rather than 2045 AD, but the basic concerns espoused by Yudkowsky could still be right.
Schnaars does note that computing-related trends are the exception to technological forecasting being way too optimistic: forecasts about computing, he finds, were often right or only modestly optimistic. In 1989, the exceptional nature of computing may have seemed like only a minor point in a book about many other failed technological forecasts. In 2014, the point is anything but minor. To the extent that there are systematic reasons for computing being different from the other technological realms where Schnaars notes a bad track record of forecasting, his critique isn't too relevant: the one trend that grows exponentially, in line with bullish expectations, will eventually come to dominate the rest. And to the extent that software eats the world, it could spill over into other trends as well.
A lot of the suggestions offered by Schnaars (particularly suggestions on diversification, field testing, and collecting feedback) are routinely implemented by many top companies today, and even more so by the top technology companies. This isn’t necessarily because they read him. It’s probably largely because it’s a lot easier to implement those suggestions in today’s world with the Internet.
#1: The criticism of “technological wonderland”: it’s all about timing, honey!
Schnaars is critical of forecasters for being too enamored with the potential of a technology and replacing hard-nosed realism with wishful thinking based on what they’d like the technology to do. Two important criticisms he makes in this regard are:
Forecasters often naively extrapolate price-performance curves, ignoring both economic and technological hurdles.
Forecasters often focus more on what is possible rather than what people actually want as consumers. They ignore the fact that new product ideas that sound cool may not deliver enough value to end users to be worth the price tag.
The criticism remains topical today. Futurists often extrapolate trends such as Moore's law far into the future, to the point where there's considerable uncertainty surrounding both the technological feasibility and the economic incentives. A notable example is Ray Kurzweil, the well-known futurist and author of The Singularity Is Near, whose prediction record is decidedly mixed. An earlier post of mine included a lengthy discussion of the importance of economic incentives in facilitating technological improvement. I drafted that post before reading Megamistakes, and the points I make there aren't too similar to the specific points in the book, but they are in the same general direction.
Schnaars notes, but in my view gives insufficient emphasis to, the following point: many of the predictions he grades aren't fundamentally misguided at a qualitative level. They're just wrong on timing. In fact, a number of them have been realized in the 25 years since. Some others may be realized over the next 25 years, yet more may be realized over the next 100 years, and some may be realized centuries from now. What the predictions got wrong was timing, in the following two senses (a small numerical sketch follows the list):
Due to naive extrapolation of price-performance curves, forecasters underestimate the time needed to attain specific price-performance milestones. For instance, they might think that you’d get a certain kind of technological product for $300 by 1985, but it might actually come to market at that price only in 2005.
Because of their own obsession with technology, forecasters overestimate reservation prices (i.e., the maximum prices at which consumers are willing to buy a technological product). Thus, even when a particular price-performance milestone is attained, it fails to lead to the widespread adoption that forecasters had predicted.
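To make the first sense concrete, here is a minimal sketch (all prices and decline rates are hypothetical, chosen only for illustration) of how a seemingly modest error in an extrapolated rate of price decline compounds into a timing error of decades:

```python
import math

def years_to_reach(start_price, target_price, annual_decline):
    """Years for a price to fall from start_price to target_price,
    assuming a constant fractional decline per year."""
    return math.log(start_price / target_price) / -math.log(1.0 - annual_decline)

# Hypothetical product: costs $3000 today; the milestone is a $300 price point.
start, target = 3000.0, 300.0

for label, decline in [("forecast, 30%/yr decline", 0.30),
                       ("reality,  10%/yr decline", 0.10)]:
    print(f"{label}: {years_to_reach(start, target, decline):5.1f} years to $300")
# forecast, 30%/yr decline:   6.5 years to $300
# reality,  10%/yr decline:  21.9 years to $300
```

A forecaster who overestimates the annual rate of decline by twenty percentage points misses the milestone by roughly fifteen years. And, per the second point, even hitting the milestone guarantees nothing if $300 still exceeds consumers' actual reservation prices.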
The gravity you assign to this error depends heavily on the purpose of the forecast. If it’s for a company deciding whether to invest a few million dollars in research and development, then being off by a couple of decades is a ruinous proposition. If you’re trying to paint a picture of the long term future, on the other hand, a few decades here and there need not be a big deal. Schnaars seems to primarily be addressing the first category.
Schnaars makes the point about timing in more detail here (pp. 120-121) (emphasis mine):
A review of past forecasts for video recorders and microwave ovens illustrates the length of time required for even the most successful innovations to diffuse through a mass market. It also refutes the argument that we live in times of ever faster change. Both products were introduced into commercial markets shortly after World War II. Both took more than twenty years to catch fire in a large market. The revolution was characterized more by a series of fits and starts than by a smooth unfolding pattern. The ways in which they achieved success suggests something other than rapid technological change. Progress was slow, erratic, and never assured. And this applies to two of the most successful innovations of the past few decades! The record for less successful innovations is even less impressive.
The path to success for each of those products was paved with a mix of expected and unexpected events. First, it was widely known that to be successful it was necessary to get costs down. But as costs fell, other factors came into play. Microwave ovens looked as if they were going to take off in the late 1960s, when consumer advocates noted that the ovens leaked radiation when dropped from great heights. The media dropped the “great heights” part of the research, and consumers surmised that they would be purchasing a very dangerous product. Consumers decided to cook with heat for a few years longer. Similarly, the success of video recorders is usually attributed to Sony’s entry with Betamax in the mid-1970s. But market entries went on for years with the video recorder. Various interpretations of the product were introduced onto the market throughout the 1960s and 1970s. A review of these entries clearly reveals that the road to success for the VCR was far more rocky than the forecasts implied. Even for successful innovations, which are exceptions to begin with, the timing of market success and the broad path the product will follow are often obscured from view.
One example where Schnaars notes that timing is the main issue is that of fax machines (full quote in the quote dump).
Here are some technologies that Schnaars notes as failed predictions, but that have, in the intervening years (1989-2014), emerged in roughly the predicted form. Full quotes from the book are in the quote dump.
Computerphones (now implemented as smartphones, though the original vision was of landline rather than mobile phones).
Picture phones, specifically the AT&T PicturePhone (now implemented as smartphones and as computers with built-in webcams, though note again that the original vision involved landline phones). See Wikipedia for more.
Videotex (an early offering whose functionality is now included in GUI-based browsers accessing the World Wide Web and other Internet services).
An interesting general question that this raises, and that I don’t have an offhand answer to, is whether there is a tradeoff between having a clear qualitative imagination about what a technology might look like once matured, and having a realistic sense of what will happen in the next few years. If that’s the case, the next question would be what sort of steps the starry-eyed futurist types can take to integrate realistic timing into their vision, and/or how people with a realistic sense of timing can acquire the skill of imagining the future without jeopardizing their realism about the short term.
#2: Computing: the exception that eviscerates the rule?
Schnaars acknowledges computing as the exception (pp. 123-124) (emphasis mine, longer version of quote in the quote dump):
Most growth market forecasts, especially those for technological products, are grossly optimistic. The only industry where such dazzling predictions have consistently come to fruition is computers. The technological advances in this industry and the expansion of the market have been nothing short of phenomenal. The computer industry is one of those rare instances where optimism in forecasting seems to have paid off. Even some of the most boastful predictions have come true. In other industries, such optimistic forecasts would have led to horrendous errors. In computers they came to pass.
[...] The most fascinating aspect of those predictions is that in almost any other industry they would have turned out to be far too optimistic. Only in the computer industry did perpetual boasting turn out to be accurate forecasting, until the slowdown of the mid-1980s.
The tremendous successes in the computer industry illustrate an important point about growth market forecasting. Accurate forecasts are less dependent on the rate of change than on the consistency and direction of change. Change has been rampant in computers; but it has moved the industry consistently upward. Technological advances have reduced costs, improved performance, and, as a result, expanded the market. In few other industries have prices declined so rapidly, opening up larger and larger markets, for decades. Consequently, even the most optimistic predictions of market growth have been largely correct. In many slower growth industries, change has been slower but has served to whipsaw firms in the industry rather than push the market forward. In growth market forecasting, rapid change in one direction is preferable to smaller erratic changes.
This is about the full extent to which Schnaars discusses the case of computing. His failure to discuss it in more depth is a curious omission. In particular, I would have liked to see whether he had an explanation for why computing turned out so different, and whether this was due to the fundamental nature of computing or to a lucky historical accident. Further, if Schnaars believed that computing was fundamentally different, how did he fail to see the long-run implications, namely that computing would eventually become a dominant factor in all forms of technological progress?
So what makes computing different? I don’t have a strong view, but I think that the general-purpose nature and wide applicability of computing may have been critical. A diverse range of companies and organizations knew that they stood to benefit from the improvement of computing technology. This gave them greater incentives to pool and share larger amounts of resources. Radical predictions, such as Moore’s law, were given the status of guidelines for the industry. Moreover, improvements in computing technology affected the backend costs of development, and the new technologies did not have to be sold to end consumers. So end consumers’ reluctance to change habits was not a bottleneck to computing progress.
Contrast this with a narrower technology such as picture phones. Picture phones were a separate technology developed by a phone company, whose success heavily depended on what that company’s consumers wanted. Whether AT&T succeeded or failed with the picture phone, most other companies and organizations didn’t care.
Indeed, when the modern equivalents of picture phones, computerphones, and Videotex finally took off, they did so as small addenda to a thriving low-cost infrastructure of general-purpose computing.
The lessons from Megamistakes suggest that converting advances in computing into products that consumers use can be a lot more tricky and erratic than making the advances themselves.
I also think there’s a strong possibility that the accuracy of computing forecasts is declining, and that the problems Schnaars outlines in his book (namely, consumers not finding the new technology useful) will start to bite computing as well. For more, see my earlier post.
#3: Main suggestions already implemented nowadays?
Some of the suggestions that Schnaars makes on the strategy front are listed in Chapter 11 (Strategic Alternatives to Forecasting) and include:
1. Robust Strategies: If a firm cannot hope to ascertain what future it will face, it can develop a strategy that is resilient no matter which of many outcomes occurs (p. 163).
2. Flexible Strategies: Another strategy for dealing with an uncertain future is to remain flexible until the future becomes clearer (p. 165).
3. Multiple Coverage Strategies: Another alternative to forecasting growth markets is to pursue many projects simultaneously (p. 167).
I think that flexible strategies (2) and multiple coverage strategies (3) in particular have become much more common in the modern era, and robust strategies (1) have too, though less obviously. This is particularly true in the software and Internet realm, where one can field-test many different experiments over the Internet. But it's also true for manufacturing, as better point-of-sale information and a supply chain that records information accurately at every stage allow for rapid changes to production processes (cf. just-in-time manufacturing). The example of the clothing retailer Zara is illustrative: they measure fashion trends in real time and change their manufacturing choices in response. In his book Everything Is Obvious: *Once You Know the Answer, Duncan Watts uses the phrase “measure and react” for this sort of strategy; a toy sketch of the idea follows.
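The sketch below is a toy model of “measure and react” (the function name, window size, and demand numbers are all mine, not Watts's or Zara's): each period, production follows a short trailing window of observed demand instead of committing up front to a long-range forecast.

```python
def measure_and_react(observed_demand, window=3, initial_guess=100.0):
    """Each period, produce the trailing-window average of observed demand
    instead of committing up front to a long-range forecast."""
    plan = []
    for t in range(len(observed_demand)):
        recent = observed_demand[max(0, t - window):t]
        plan.append(sum(recent) / len(recent) if recent else initial_guess)
    return plan

# Made-up weekly demand for a fashion item: a fad that spikes and then fades.
demand = [100, 120, 180, 300, 320, 250, 150, 90, 60, 50]
for week, (d, p) in enumerate(zip(demand, measure_and_react(demand))):
    print(f"week {week}: demand={d:3d}  production={p:6.1f}")
```

Production lags the fad both on the way up and on the way down, but it never commits to a multi-year bet on a trend that may not materialize; that is the essence of substituting measurement for forecasting.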
Other pieces of advice that Schnaars offers, which I think are followed to a greater extent today than in his time, partly facilitated by greater information flow and more opportunities for measurement, collaboration, and interaction:
Start Small: Indeed, a lot of innovation today is either done by startups or by big companies trying out small field tests of experimental products. It’s very rarely the case that a company invests a huge amount in something before shipping or field-testing it. Facebook started out at Harvard in February 2004 and gradually ramped up to a few other universities, and only opened to the general public in September 2006 (see their timeline).
Take Lots of Tries: The large numbers of failed startups as well as shelved products in various “labs” of Google, Facebook, and other big companies are testimony to this approach.
Enter Big: Once something has been shown to work, the scaling up can be very rapid in today’s world, due to rapid information flows. Facebook got to a billion users in under a decade of operation. When they roll out a new feature, they can start small, but once the evidence is in that it’s working, they can roll it out to everybody within months (a toy sketch of such a staged rollout follows this list).
Setting Standards of Uniformity: It’s easier than before to publicly collaborate in an open fashion on standards. There are many successful examples that form the infrastructure of the Internet, most of them based on open source technologies. Some recent examples of successful collaborative efforts include Schema.org (between search engines), OpenID (between major Internet email ID providers and other identity providers such as Facebook), Internet.org (between Facebook and cellphone manufacturing companies), and the Open Compute Project.
Developing the Necessary Infrastructure: Big data companies preemptively get new data center space before the need for it starts kicking in. Data center space is particularly nice because server power and data storage are needed for practically all their operations, and therefore are agnostic to what specific next steps the companies will take. This fits in with the “Flexible Strategy” idea.
Ensuring a Supply of Complementary Products: This isn’t uniformly followed, but arguably the most successful companies have followed it. Google expanded into Maps, News, and email long before people were clamoring for them. It got into the phone operating system business with Android and the web browser business with Chrome. Facebook has been more focused on its core business of social networking, but it too has supported complementary initiatives such as Internet.org to boost global Internet connectivity.
Lowering Prices: Schnaars cites the example of Xerox, which sidestepped the problem of the high prices of its machines by leasing them instead of selling them. Something similar is done with smartphones today, via carrier subsidies and installment plans.
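Returning to “Enter Big”: one concrete mechanism behind staged rollouts in software is percentage-based feature gating. The sketch below is a generic illustration under my own assumptions (the feature name, hash scheme, and percentages are all hypothetical), not any particular company's system:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically bucket a user into [0, 100) for a feature, so the
    same users stay enabled as the rollout percentage is increased."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = (int(digest[:8], 16) % 10000) / 100.0  # 0.00 .. 99.99
    return bucket < percent

# Start small, then enter big: widen the same bucketing from 1% to 100%.
for pct in (1, 10, 50, 100):
    enabled = sum(in_rollout(str(uid), "new_feed", pct) for uid in range(10000))
    print(f"{pct:3d}% rollout -> ~{enabled} of 10000 test users enabled")
```

Because the bucketing is deterministic, widening the percentage only ever adds users, so the small field test and the eventual full rollout are the same mechanism at different dial settings.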
Schnaars’ closing piece of advice is (p. 183):
Assume that the Future Will Be Similar to the Present
Is this good advice, and are companies and organizations today following it? I think it’s both good advice and bad advice. On the one hand, Google was able to succeed with GMail because they correctly forecast that disk space would soon be cheap enough to make GMail economical. In this case, it was their ability to see the future as different from the present that proved to be an asset. Similarly, Paul Graham describes good startup ideas as ones created by people who live in the future rather than the present.
At the same time, the best successes do assume that the future won’t look physically too different from the present. And unless there is a strong argument in favor of a particular way in which the future will look different, planning based on the present might be the best one can hope for. GMail wasn’t based on a fundamental rethinking of human behavior. It was based on the assumption that most things would remain similar, but that Internet connectivity and bandwidth would improve and disk space costs would fall. Both assumptions were well-grounded in the historical record of technology trends, and both were vindicated by history.
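As a back-of-envelope illustration of that kind of reasoning (the numbers below are my rough assumptions for illustration, not Google's actual costs): if raw disk cost on the order of $1/GB around 2004 and storage costs fell by roughly a third each year, a free 1 GB mailbox per user quickly becomes cheap to provide.

```python
# Back-of-envelope version of the GMail bet. All numbers are rough
# assumptions for illustration, not Google's actual costs.
cost_per_gb_2004 = 1.00   # assume raw disk at roughly $1/GB around 2004
annual_decline = 0.35     # assume storage costs fall ~35% per year

for year in range(2004, 2011):
    cost = cost_per_gb_2004 * (1.0 - annual_decline) ** (year - 2004)
    print(f"{year}: ~${cost:.2f} per 1 GB mailbox per user")
```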
Thanks to Luke Muehlhauser (MIRI director) for recommending the book and to Jonah Sinick for sending me his notes on it. Neither of them has vetted this post.
Quote dump
To keep the main post short, I’m publishing a dump of relevant quotes from the book separately, in a quote dump post.
As a side note, it’s interesting how (some) companies have lately been applying this general principle by not investing in computing infrastructure: cloud computing providers buy the just-in-case servers, so the market as a whole holds some computing power in reserve, but not for any particular company.
I wonder if Megamistakes applies to the enthusiasm about 3D printing.
3D printing a house or an airplane component is a specialty service that has value to a limited set of manufacturers. The cost of this scale of 3D printing could be cut several times over and still only pass the price-performance threshold for a few more special cases.
On the low end, hobby-printers and remote printing services are borderline useless. There is virtually NO value proposition beyond “3D Printing is cool! Here’s a neat widget that you couldn’t get from most traditional manufacturers”.
When I can have a 3D printer in my home that can easily identify and then print many of the things I need in typical household or automotive repairs, and do it cheaply and simply, I might buy one. Or if I could subscribe to genuinely neat widget designs and produce them at home easily and for a reasonable price, that’d be cool too.
But until then, I will not buy a 3D printer. And most people won’t either.
My personal view (not worth much, since I haven’t looked into this closely): in about 20-25 years, most people in cities and big towns in the First World will have access (within their city or region) to 3D printer services. There may be dedicated 3D print stores, or 3D printers may be available in copy shops or stationery stores or Internet kiosk-type places. I still suspect that people won’t use them for things they can get online or at nearby physical stores. They might use 3D printing for specialized production, custom stuff (like a custom gift for a loved one), or if they’re in a real hurry and can’t wait to have something delivered.
Further, it’ll probably take another 20-25 years after that for 3D printing at home to be ubiquitous. It’s also possible that home 3D printing will never become ubiquitous.
As mentioned above, I haven’t looked into this closely enough to have high confidence in my views.