I tried to explain in my recent post that, at the current level of technology, human-level AGI is possible but foom is not yet, in particular because of problems with the size and speed of neural nets and the way they learn.
Also, human-level AGI is not powerful enough to foam. Human science is developing, but it includes millions of scientists; a foaming AI would need the same complexity but would have to run 1000 times quicker. We don't have such hardware. http://lesswrong.com/lw/n8z/ai_safety_in_the_age_of_neural_networks_and/
But the field of AI research is foaming, with a doubling time of 1 year now.
foom, not foam, right?
A doubling time of 1 year is not a FOOM. But thank you for taking the time to write up a post on AI safety pulling from modern AI research.
It is not foom, but in 10-20 years its results will be superintelligence. I am now writing a post that will give more details about how I see it: the main idea is that AI speed improvement will follow a hyperbolic law, but that it will evolve as a whole environment, not as a single fooming AI agent.
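To make the distinction concrete, here is a minimal numeric sketch. The functional form x(t) = t_s / (t_s - t) and the 20-year horizon are illustrative assumptions, not figures from the post:

```python
# Sketch comparing two growth regimes (illustrative constants,
# not figures from the original post):
#   - exponential growth with a 1-year doubling time
#   - hyperbolic growth, which diverges at a finite date t_s

def exponential(t, doubling_time=1.0):
    """Steady exponential growth: doubles every `doubling_time` years."""
    return 2.0 ** (t / doubling_time)

def hyperbolic(t, t_s=20.0):
    """Hyperbolic growth x(t) = t_s / (t_s - t): starts at 1 and
    diverges (a finite-time singularity) as t approaches t_s."""
    return t_s / (t_s - t)

for t in [0, 5, 10, 15, 18, 19, 19.9]:
    print(f"year {t:4.1f}: exponential = {exponential(t):10.3g}, "
          f"hyperbolic = {hyperbolic(t):8.3g}")
```

The difference is qualitative: the exponential curve grows without bound but never diverges, while the hyperbolic one reaches infinity at a finite date. That is roughly the picture I have in mind: not a single agent fooming overnight, but the whole environment on a hyperbolic trajectory that ends in superintelligence on a 10-20 year horizon.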