>It took 17 years to go from perceptrons to back propagation…
>… therefore I have moldy Jell-O in my skull for saying we won’t go from manually debugging buffer overruns to superintelligent AI within 30 years…
If you’d asked me in 1995 how many people it would take for the world to develop a fast, distributed system for moving films and TV episodes to people’s homes on a ‘when you want it, how you want it’ basis, internationally, without ads, I’d have said hundreds of thousands. In practice it took one guy with the right algorithm, depending on whether you pick Napster or BitTorrent as the magic that solves the problem without the need for any new physical technologies.
The thing about self-improving AI is that we only need to get the algorithm right (or wrong :-() once.
We know with probability 1 that it’s possible to create self-improving intelligence. After all, that’s what most humans are. No doubt other solutions exist. If we can find an algorithm or heuristic to implement any one of these solutions, or if we can even find any predecessor of any one of them, then we’re off—and given the right approach (be that algorithm, machine, heuristic, or whatever) it should be simply a matter of throwing computer power (or Moore’s law) at it to speed up the rate of self-improvement. Heck, for all I know it could be a giant genetically engineered brain in a jar that cracks the problem.
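The “throw computer power at it” intuition can be made concrete with a toy model (purely illustrative, not a forecast — the growth constant and the compute doubling time below are arbitrary assumptions): suppose capability improves at a rate proportional to both current capability and available compute, while compute doubles on a Moore’s-law-style schedule. The result is faster-than-exponential growth in capability.

```python
# Toy model of "throw compute at self-improvement" (illustrative only;
# the growth constant k and the 2-year compute doubling time are
# arbitrary assumptions, not estimates of anything real).
def simulate(years=10, steps_per_year=100, k=0.05):
    capability = 1.0   # current "intelligence" of the system
    compute = 1.0      # available hardware, on a Moore's-law schedule
    dt = 1.0 / steps_per_year
    history = []
    for step in range(years * steps_per_year + 1):
        if step % steps_per_year == 0:
            history.append((step // steps_per_year, capability))
        # improvement rate scales with both current capability and compute
        capability += k * capability * compute * dt
        compute *= 2 ** (dt / 2.0)  # compute doubles every 2 simulated years
    return history

for year, cap in simulate():
    print(f"year {year:2d}: capability {cap:8.2f}")
```

Running it shows the year-over-year growth ratio itself climbing each year — the same “bad, then hideous, then awful” acceleration pattern as the parasite story below.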
Put it this way. Imagine you are a parasite. For x billion years you’re happy, then some organism comes up with sexual reproduction and suddenly it’s a nightmare. But eventually you catch up again. Then suddenly, in just 100 years, human society basically eradicates you completely out of the blue. The first 50 years of that century are bad. The next 20 are hideous. The next 10 are awful. The next 5 are disastrous… etc.
Similarly, useful power-plant-scale nuclear fusion has always been 30 years away. But at some point, I suspect, it will suddenly be only 2 years away, completely out of the blue…
I think EY is failing to take into account the exponential growth of AI researchers, their access to information, their ability to communicate, and the computation and algorithmic power they have at their disposal today.
I don’t think the solution to a similar problem would take 17 years today.
Of course, a superintelligent AI is a harder problem than back propagation, and I doubt it’s a comparable problem anyway. I don’t expect some equation tweaking a single known algorithm to do the trick. I suspect it’s more of a systems integration problem. Brains are complex systems of functional units which have evolved organically over time.