I suggest that rather than putting “AI is possible” and “exponential growth of research will continue” in as assumptions, it would be better to adjust the conclusion: 95% probability that by 2035 the exponential growth of human AI research will have stopped. This could be (1) because it produced a strongly superhuman AI and declared its job complete, or (2) because we found good reason to believe that AI is actually impossible, or (3) because we found other more exciting things to work on, or (4) because there weren’t enough resources to keep the exponential growth going, or (etc.).
I think this framing is better because it emphasizes that there are lots of ways for exponential growth in AI research to stop [EDITED to add: or to slow substantially] other than achieving all the goals of such research.
The exponential growth of ML research may already be slowing.
Here’s the number of paper hits for the keyword “machine learning”:
http://i.imgur.com/jezwBhV.png
And here’s the number of paper hits for the keyword “pattern recognition”:
http://i.imgur.com/Sor5seJ.png
(Don’t mind the tiny value for 2016: those are papers due to be published next year, and obviously that year’s data hasn’t been collected yet!)
Source: Scopus, plotted with Gadfly.
If I had to guess, I’d say we’ve already reached the point of diminishing returns on the ratio of how much material you have to learn to how much you can contribute. Research is hard.
This is interesting, but I wonder how much of it is just shifts in naming.
What does the graph for, say, deep learning or neural nets look like?
I don’t know. You can plot the data for yourself.
Yes, but we need to add “Humanity goes extinct before this date,” which is also possible. ((( A sufficiently large catastrophe, like a supervirus or nuclear war, could prevent AI from ever being created.
That would be another way for exponential growth in human AI research to stop, yes. You can think of it as one of the options under “(etc.)”, or as a special case of “not enough resources”.