A prior for technological discontinuities

Introduction

I looked at 50 technologies taken from the Wikipedia list History of technology, which I expect to provide a mostly random sample of technologies. Of these 50 technologies, I think that 19 have a discontinuity, 13 might have one, and 18 probably don’t. Of the 19, I’d call 12 “big” discontinuities, for an initial probability estimate of 12/50 = 24%. I provide other estimates in the section “More elaborate models for computing the base rate of big discontinuities”.
Unlike some previous work by AI Impacts (or, for that matter, by myself), I am able to produce something which looks like a prior because I consider a broad bag of different technologies and then ask what proportion have discontinuities. Previous approaches have specifically looked for discontinuities and found examples, and so could not estimate their prevalence.
The broad bag of technologies I draw from was produced by Wikipedia editors following their own designs. They most likely weren’t thinking in terms of discontinuities, and are unlikely to have selected for them. However, these editors might still have been subject to availability bias, an Anglocentric bias, etc. This might make the dataset mildly imperfect, that is, not completely representative of all possible technologies, but I’d say that most likely it’s still good enough.
Furthermore, I didn’t limit myself to discontinuities which are easily quantifiable or for which data is relatively easy to gather; instead, I quickly familiarized myself with each technology in my list, mostly by reading its Wikipedia entry, and used my best judgement as to whether there was a discontinuity. This method is less rigorous than previous work, but it doesn’t fall prey to Goodhart’s law: I want a prior for all discontinuities, not only for those which are quantifiable or for which numerical data exists.
However, this method does give greater weight to my own subjective judgment. In particular, I suspect that I, being a person with an interest in technological discontinuities, might produce a higher rate of false positives. One could dilute this effect by pooling many people’s assessments, as in Assessing Kurzweil’s predictions for 2019.
All data is freely available here. While gathering it, I came across a number of somewhat interesting anecdotes, some of which are gathered in this shortform.
Many thanks to Misha Yagudin, Gavin Leech and Jaime Sevilla for feedback on this document.
Table of contents
Introduction
Discontinuity stories
More elaborate models for computing the base rate of big discontinuities
Adjustment for AI
Conclusion
Discontinuity stories
One byproduct of having looked at a big bag of technologies, some of which appear to show a discontinuity, is that I can outline some mechanisms or stories by which discontinuities happen. Here is a brief list:
Sharp pioneers focus on and solve a problem (Wright brothers, Gutenberg, Marconi, etc.). For example:
Using a methodological approach and concentrating on the controllability of the aircraft, the brothers built and tested a series of kite and glider designs from 1900 to 1902 before attempting to build a powered design. The gliders worked, but not as well as the Wrights had expected based on the experiments and writings of their 19th-century predecessors. Their first glider, launched in 1900, had only about half the lift they anticipated. Their second glider, built the following year, performed even more poorly. Rather than giving up, the Wrights constructed their own wind tunnel and created a number of sophisticated devices to measure lift and drag on the 200 wing designs they tested. As a result, the Wrights corrected earlier mistakes in calculations regarding drag and lift. Their testing and calculating produced a third glider with a higher aspect ratio and true three-axis control. They flew it successfully hundreds of times in 1902, and it performed far better than the previous models. By using a rigorous system of experimentation, involving wind-tunnel testing of airfoils and flight testing of full-size prototypes, the Wrights not only built a working aircraft, the Wright Flyer, but also helped advance the science of aeronautical engineering.
Conflict (and perhaps massive state funding) catalyzes project (radar, nuclear weapons, Bessemer process, space race, rockets)
Serendipity; inventors stumble upon a discovery (radio telescopy, perhaps polymerase chain reaction, purportedly Carl Frosch and Lincoln Derick’s discovery of surface passivation). Purportedly, penicillin (which is not in my dataset) was also discovered by accident. One might choose to doubt this category because a fortuitous discovery makes for a nicer story.
Industrial Revolution makes something much cheaper/more viable/more profitable (furniture, glass, petroleum, candles). A technology of particular interest is the centrifugal governor and other tools in the history of automation, which made other technologies undergo a discontinuity in terms of price. For example:
The logic performed by telephone switching relays was the inspiration for the digital computer. The first commercially successful glass bottle blowing machine was an automatic model introduced in 1905. The machine, operated by a two-man crew working 12-hour shifts, could produce 17,280 bottles in 24 hours, compared to 2,880 bottles made by a crew of six men and boys working in a shop for a day. The cost of making bottles by machine was 10 to 12 cents per gross compared to $1.80 per gross by the manual glassblowers and helpers.
Perfection is reached (one-time pad, Persian calendar which doesn’t require leap days)
Exploring the space of possibilities leads to overdue invention (bicycle). Another example here, which isn’t in my dataset, is luggage with wheels, invented in 1970.
Civilization decides to solve a long-standing problem (sanitation after the Great Stink of London, space race)
New chemical or physical processes are mastered (Bessemer process, activated sludge, Hall–Héroult process, polymerase chain reaction, nuclear weapons)
Small tweak has qualitative implications (Hale rockets: spinning makes rockets more accurate and less likely to veer).
Change in context makes a technology more viable (it is much easier to print European characters than Chinese characters). For example:
The general assumption is that movable type did not replace block printing in places that used Chinese characters due to the expense of producing more than 200,000 individual pieces of type. Even woodblock printing was not as cost productive as simply paying a copyist to write out a book by hand if there was no intention of producing more than a few copies.
Continuous progress encounters discrete outcomes. Military technology might improve continuously or in jumps, but sometimes we care about a discrete outcome, such as “will it defeat the British” (cryptography, rockets, radar, radio, submarines, aviation). A less bellicose example would be “will this defeat the world champion at Go/chess/StarCraft/poker/…”. AI Impacts also mentions a discontinuity in the “time to cross the Atlantic”, and has some more stories here.
More elaborate models for computing the base rate of big discontinuities
AI Impacts states: “32% of trends we investigated saw at least one large, robust discontinuity”. If I take my 12 out of 50 “big” discontinuities and assume that one third would be found to be “large and robust” by a more thorough investigation, then I would expect 4 out of the 50 technologies to display a “large and robust discontinuity” in the sense in which AI Impacts uses those words. However, I happen to think that the “robust” here is doing too much work, filtering out discontinuities which probably existed but for which good data may not exist or may be ambiguous. For example, they don’t classify the fall in book prices after the European printing press as a “large and robust” discontinuity (!).
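For concreteness, here is a minimal sketch of that arithmetic; the one-third “large and robust” fraction is the assumption stated above, not a measured quantity:

```python
# Back-of-the-envelope base rate for "large and robust" discontinuities.
n_technologies = 50
n_big = 12             # "big" discontinuities in my dataset
robust_fraction = 1/3  # assumed fraction a more thorough investigation would confirm

expected_robust = n_big * robust_fraction     # ~4 technologies
base_rate = expected_robust / n_technologies  # ~0.08

print(f"{expected_robust:.0f} out of {n_technologies} technologies, i.e. {base_rate:.0%}")
```

This yields roughly 8%, which I round to the ~10% used later on.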
I can also compute the average time from the first mention of a technology until its first big discontinuity. This gives 1055 years, or roughly 0.001 discontinuities per year, very much like AI Impacts’ numbers (also 0.001). But this average is inflated, because printing, aluminium, aviation, etc. have histories spanning millennia. The earliest discontinuity in my database is printing in 1450, and the next one after that is the petroleum industry in 1850, which suggests that there was a long period in which discontinuities were uncommon.
If we ignore printing and instead compute the average time since either the start of the Industrial Revolution, defined to be 1750, or the start of the given technology, whichever is later (e.g., phenomena akin to radar started to be investigated in 1887), then the average time until the first discontinuity is 88 years, i.e., roughly 0.01 discontinuities per year.
Can we really take the average time until a discontinuity and translate it into a yearly probability, like 1% per year? Not without caveats; we’d also have to consider hypotheses like whether there is a minimum wait time from the invention until a discontinuity, or whether there are different regimes (e.g., a discontinuity within the same generation as the inventor, or one or more generations afterwards, etc.). The wait times since either 1750 or the beginning of a technology are {13, 31, 32, 47, 65, 92, 100, 136, 138, 152, 163} years.
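As a sketch, the following reproduces the 88-year average and the implied yearly rate from these wait times; note that turning a mean wait into a fixed per-year probability implicitly assumes a constant rate, which is exactly the kind of assumption the caveats above call into question:

```python
# Wait times (in years) from 1750, or from the start of the technology if later,
# until the first big discontinuity (printing excluded).
wait_times = [13, 31, 32, 47, 65, 92, 100, 136, 138, 152, 163]

mean_wait = sum(wait_times) / len(wait_times)  # ~88 years
yearly_rate = 1 / mean_wait                    # ~0.011, assuming a constant rate

print(f"mean wait: {mean_wait:.0f} years; implied rate: {yearly_rate:.3f} per year")
```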
Adjustment for AI
So I have a rough prior that ~10% of technologies (4 out of 50) undergo a “large and robust” discontinuity, and, if a technology does, I’d give a ~1% chance per year of the discontinuity occurring, for an unconditional ~0.1% per year. But this is only a prior from which to begin, and I have more information and inside views about AI; for example, GPT-3 was maybe a discontinuity for language models.
With that in mind, I might adjust to something like a 30% chance that AI will undergo a “large and robust” discontinuity, at a rate of maybe 2% per year if it does. I’m not doing this in a principled way, but rather drawing on past forecasting experience, and I’d expect this estimate to change substantially if I put more thought into it. One might also argue that these probabilities would only apply while humans are the ones doing the research.
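To make the bookkeeping explicit, here is a toy calculation of the unconditional per-year probabilities implied by these numbers; all of the inputs are the rough subjective estimates given above, not outputs of the dataset:

```python
def unconditional_yearly(p_ever: float, p_yearly_if_ever: float) -> float:
    """Yearly chance of a large, robust discontinuity, without conditioning on
    whether the technology is one that ever has such a discontinuity."""
    return p_ever * p_yearly_if_ever

# Outside-view prior for a generic technology: ~10% ever, ~1% per year if so.
print(f"generic technology: {unconditional_yearly(0.10, 0.01):.2%} per year")  # 0.10%

# Inside-view adjustment for AI: ~30% ever, ~2% per year if so.
print(f"AI: {unconditional_yearly(0.30, 0.02):.2%} per year")  # 0.60%
```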
Conclusion
I have given some rough estimates of the probability that a given technology’s progress will display a discontinuity. For example, I arrive at a ~10% chance that a technology will display a “large and robust” discontinuity within its lifetime, and maybe a ~1% chance per year of that discontinuity occurring if it does. For other operationalizations, the qualitative conclusion that discontinuities are not uncommon still holds.
One might also carry out essentially the same project but take technologies from Computing Timelines and History of Technology, and then produce a prior based on the history of computing so far. I’d also be curious to see discussion among forecasters of the probability of a discontinuity in AI in the next two to five years, in the spirit of this AI timelines forecasting thread.