I agree with almost all of this article. However, the conclusion that the transition won’t happen quite suddenly seems wrong to me.
Many technologies seem to go through a similar pattern of progress. Music, for example: it became possible to record it tunefully in the 1900s, and by 1960 recording had essentially been mastered to the fidelity the human ear could hear.
Creating electronic sounds became possible to some extent in the first half of the twentieth century, giving us the electric guitar, the Hammond organ, and eventually various analogue synthesizers. Then came digital synthesizers, which displaced the analogues over a relatively short time and led to a point in the 1990s when it became possible to create essentially any sound. Now your phone is powerful enough to do this.
Simple digital light detectors existed in the 1970s. These became consumer digital cameras in the late 1990s. Now we have basically reached a level where pixel count is no longer a limiting factor, and prices have dropped immensely.
Digital flat screens were science fiction for decades, then comparatively suddenly became possible, then available at a premium, then cheaper than CRTs.
In each case there’s a longish incubation period during which nothing much seems to change. Then there’s a rush of progress over little more than a couple of decades, leading to a new status quo in which the old technology is completely displaced.
AI is starting to stir. It has had a long period in which early success gave way to apparent stasis. But now we are seeing real progress again, and I suspect a period of disruptive change caused by AI technologies is not that far off.
To make a prediction here: we will go from having essentially useless AI to human-level AI in around a decade or two, just as we have seen with digital cameras, displays, synths and so on. The biggest uncertainty is which decade it will be. And the machines won’t stop at human level; they will drive straight through and keep going, over a period of roughly five years. It’s only after that has happened that progress may start speeding up because of it.
In each case there’s a longish incubation period during which nothing much seems to change. Then there’s a rush of progress over little more than a couple of decades, leading to a new status quo in which the old technology is completely displaced.
Uh, to me that looks like 4 examples of gradual progress, and 0 examples of explosions (that is, none that are more like fooms than gradual curves).
How are you defining “explosion”, though? A plot of the number of splitting nuclei per unit time in a recently detonated nuclear bomb looks like a gradual curve, if viewed on an appropriate timescale...
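To put a rough number on that, here is a minimal sketch (the 10 ns doubling time and the unit starting count are illustrative assumptions, not real weapon physics) of how the same exponential curve reads as a series of modest factor-of-two steps at one resolution, and as a jump of roughly thirty orders of magnitude at another:

```python
# A minimal sketch, assuming a ~10 ns doubling time per fission generation
# (an illustrative round number, not precise weapon physics).
DOUBLING_TIME_NS = 10.0

def fissions_at(t_ns, n0=1.0):
    """Number of splitting nuclei at time t_ns, starting from n0 at t = 0."""
    return n0 * 2 ** (t_ns / DOUBLING_TIME_NS)

# Zoomed in, generation by generation, each step is only a factor of two:
for t in range(0, 60, 10):
    print(f"t = {t:4d} ns -> {fissions_at(t):12.0f} nuclei splitting")

# Zoomed out to a single microsecond, the same smooth curve has crossed
# roughly thirty orders of magnitude, which is what we call an explosion.
print(f"t = 1000 ns -> {fissions_at(1000):12.3e} nuclei splitting")
```

Whether the curve counts as “gradual” or “explosive” depends only on the window you view it through, which is the point of the nuclear analogy.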
I suspect a period of disruptive change caused by AI technologies is not that far off.
To make a prediction here: we will go from having essentially useless AI to human-level AI in around a decade or two, just as we have seen with digital cameras, displays, synths and so on. The biggest uncertainty is which decade it will be. And the machines won’t stop at human level; they will drive straight through and keep going, over a period of roughly five years. It’s only after that has happened that progress may start speeding up because of it.
It doesn’t sound as though we disagree too much. I expect progress on billion-year timescales, though it won’t be so dramatic after a while. I’m not arguing for low levels of disruption, but I don’t think that systematically exaggerating the expected level of disruption is particularly helpful.