http://intelligenceexplosion.com/ paints a pretty naive picture.

Designing artificial intelligences is a skill composed of many sub-skills of varying levels of difficulty.
Machines are better at many of those today.
To say “one day” creates the bizarre and totally incorrect idea that machine skill will overtake human skill on the specified task on one day—and after that things will go much faster. What is actually happening is that machine skill has been gradually overtaking human skill in a range of domains, one domain at a time. The process has been going on for many decades now. Composite tasks that involve many skills—like designing complex machines by mastering hardware and software engineering—will thus be accelerated gradually by the increase in performance of machines.
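To make the composite-task point concrete, here is a toy calculation (an illustration with made-up numbers, not something from the original comment): suppose a design task splits into ten equal-sized sub-skills, and automating a sub-skill makes that part twenty times faster. The overall speedup then creeps up one automated domain at a time rather than arriving on a single day.

```python
# Toy model (illustrative numbers only): a composite task made of equal-sized
# sub-skills, automated one domain at a time. The overall speedup follows an
# Amdahl's-law shape and rises gradually rather than in a single jump.

N_SUBSKILLS = 10       # hypothetical number of sub-skills in the task
AUTO_SPEEDUP = 20.0    # hypothetical speedup once a sub-skill is automated

def overall_speedup(num_automated: int) -> float:
    """Speedup of the whole task when num_automated sub-skills are automated."""
    automated_fraction = num_automated / N_SUBSKILLS
    remaining_time = (1 - automated_fraction) + automated_fraction / AUTO_SPEEDUP
    return 1.0 / remaining_time

for k in range(N_SUBSKILLS + 1):
    print(f"{k:2d} sub-skills automated -> {overall_speedup(k):5.1f}x faster")
```

On these hypothetical numbers the overall speedup grows slowly while most sub-skills are still manual, and only climbs steeply once almost everything has been automated.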
This appears to be a confusion propagated by those wanting to exaggerate the risks of machine intelligence. I have a web page about this and must have made the point publicly a dozen times now, but few seem to listen, and consequently the same nonsense gets repeatedly pumped out, deluding wave after wave of newcomers. Perhaps brevity excuses this instance, but it is beginning to look like a deliberate deception—an attempt to make things seem worse than they are by making the transition to intelligent machines seem more sudden than it is likely to be.
This video shows the position that I am arguing against (at 18:43).
It might make good propaganda, but it is based on an inaccurate picture of what is likely to happen. We know enough to see that already.
I agree with almost all of this article. However, the conclusion that the transition won’t happen quite suddenly seems to me to be wrong.
Many technologies seem to go through a similar pattern of progress. Music, for example: it became possible to record it tunefully in the 1900s, and by 1960 recording had essentially been mastered to the fidelity the human ear could hear.
It became possible to create electronic sounds to some extent in the 1950s, and this led to the electric guitar, the Hammond organ, and various analogue synthesizers. Then came digital synthesizers, which over a relatively short time displaced the analogues, and led to a point in the 1990s when it became possible to create any sound. Now your phone is powerful enough to do this.
In the 1970s, simple digital image sensors existed. These became consumer digital cameras in the late 1990s. Now we have basically reached a level where pixel counts are no longer an issue, and prices have dropped immensely.
Digital flat screens were science fiction for decades, then comparatively suddenly became possible, then affordable (if expensive), then cheaper than CRTs.
In each case there’s a longish incubation period where nothing much apparently changes for some years. Then there’s a rush of progress over little more than a couple of decades, leading to a new status quo where the old technology is completely displaced.
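The pattern described here is essentially an S-curve. A minimal logistic sketch, using a hypothetical midpoint year and steepness rather than data from any of the examples above, looks like this:

```python
# Minimal S-curve (logistic) sketch of the "incubation, rush, new status quo"
# pattern described above. The midpoint year and steepness are hypothetical.
import math

MIDPOINT_YEAR = 2000   # hypothetical year of fastest change
RUSH_WIDTH = 4.0       # hypothetical scale: most change falls within ~2 decades

def maturity(year: float) -> float:
    """Fraction of the way to the new status quo (0 = incubation, 1 = done)."""
    return 1.0 / (1.0 + math.exp(-(year - MIDPOINT_YEAR) / RUSH_WIDTH))

for year in range(1960, 2041, 10):
    bar = "#" * int(40 * maturity(year))
    print(f"{year}: {maturity(year):5.2f} {bar}")
```

With these made-up parameters, roughly 85% of the change falls in the two decades around the midpoint, with long, nearly flat stretches on either side.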
AI is starting to stir. It has had a long period in which initial success was followed by apparent stasis. But now we are seeing real progress again, and I suspect a period of disruptive change caused by AI technologies is not that far off.
To make a prediction here—we will go from having essentially useless AI to human-level AI in around a decade or two—just as we have seen with digital cameras, displays, synths, etc. The biggest uncertainty in this is which decade it will be. And the machines won’t stop at human level—they will drive straight through and keep going over about a five-year period. And it’s only after that has happened that progress may start speeding up because of it.
In each case there’s a longish incubation period where nothing much apparently changes for some years. Then there’s a rush of progress over little more than a couple of decades, leading to a new status quo where the old technology is completely displaced.
Uh, to me that looks like 4 examples of gradual progress, and 0 examples of explosions (that is, none that are more like fooms than gradual curves).
How are you defining “explosion” though? A plot of the number of splitting nuclei per unit time in a recently detonated nuclear bomb looks like a gradual curve—if viewed on an appropriate timescale...
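To put rough numbers on the nuclear analogy (order-of-magnitude guesses on my part, not sourced figures): a chain reaction doubles smoothly once per generation, yet nearly all of the fissions are crowded into the last few generations, and the whole process is over in under a microsecond.

```python
# Rough sketch of why an exponential looks "gradual" on one timescale and
# explosive on another. The figures are order-of-magnitude illustrations only.

GENERATION_TIME_NS = 10.0   # assumed time per fission generation, in nanoseconds
GENERATIONS = 80            # assumed number of generations to completion

fissions = [2 ** g for g in range(GENERATIONS + 1)]  # doubles each generation
total = sum(fissions)
total_ns = GENERATION_TIME_NS * GENERATIONS

last_gen_share = fissions[-1] / total
last_ten_share = sum(fissions[-10:]) / total

print(f"Total duration: {total_ns:.0f} ns ({total_ns * 1e-9:.1e} s)")
print(f"Share of fissions in the final generation:     {last_gen_share:.0%}")
print(f"Share of fissions in the final 10 generations: {last_ten_share:.1%}")
```

The same curve reads as a gentle doubling per generation and as an explosion in wall-clock time; which one you see depends on the timescale of the plot.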
I suspect a period of disruptive change caused by AI technologies is not that far off.
To make a prediction here—we will go from having essentially useless AI to human-level AI in around a decade or two—just as we have seen with digital cameras, displays, synths, etc. The biggest uncertainty in this is which decade it will be. And the machines won’t stop at human level—they will drive straight through and keep going over about a five-year period. And it’s only after that has happened that progress may start speeding up because of it.
It doesn’t sound as though we disagree too much. I expect progress on billion-year timescales, though it won’t be so dramatic after a while. I’m not arguing for low levels of disruption—but I don’t think that systematically exaggerating the expected level of disruption is particularly helpful.
Once a transparently constructed AGI becomes a good programmer it can improve itself directly. A tight feedback loop like this is rather different from the rest of the progress in AI so far.
Sure—though before machine programmers can automatically program other machine programmers, they will be able to automatically write sort routines, test routines, and search routines, perform refactoring, compile code, check code, find bugs, fix bugs—and so on. Those things speed up development too. The autocatalytic aspect of this is not going to start at some point in the future. It started decades ago—centuries ago if you trace the phenomenon to its roots a bit more enthusiastically.
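The disagreement here can be made visible with a toy model (mine, with made-up parameters, not something either commenter proposes): in one version the self-improvement feedback only switches on once capability crosses a threshold; in the other the feedback has been partly in effect all along, growing as capability grows.

```python
# Toy comparison (made-up parameters) of two pictures of self-improving AI:
#  - "threshold": the feedback only kicks in once capability passes a cutoff
#  - "gradual":   the feedback has been growing with capability all along

STEPS = 60
GAIN = 0.15        # hypothetical strength of the self-improvement feedback
THRESHOLD = 2.0    # hypothetical capability level where the loop "switches on"

def run(gradual: bool) -> list:
    capability = 1.0
    history = []
    for _ in range(STEPS):
        if gradual:
            feedback = GAIN * capability   # always partly autocatalytic
        else:
            # below the cutoff, only slow "manual" progress; above it, full feedback
            feedback = GAIN * capability if capability >= THRESHOLD else 0.05
        capability += feedback
        history.append(capability)
    return history

threshold_curve = run(gradual=False)
gradual_curve = run(gradual=True)

for step in range(0, STEPS, 10):
    print(f"step {step:2d}: threshold-model {threshold_curve[step]:9.1f}   "
          f"gradual-model {gradual_curve[step]:9.1f}")
```

The threshold version stays nearly flat and then kinks upward at the cutoff, while the gradual version accelerates smoothly from the start; the point is only to show the difference in shape between the two pictures, not to say which is right.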