Lots has already been said on this topic, e.g. at http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate
I can try to summarize some relevant points for you, but you should know that you’re being somewhat intellectually rude by not familiarizing yourself with what’s already been said.
1) Brains have a ton of computational power: ~86 billion neurons and trillions of connections between them. Unless there’s a “shortcut” to intelligence, we won’t be able to efficiently simulate a brain for a long time. http://io9.com/this-computer-took-40-minutes-to-simulate-one-second-of-1043288954 describes one of the largest computers in the world taking 40 minutes to simulate one second of brain activity (i.e. this “AI” would think 2400 times slower than you or me). The first AIs are not likely to be fast thinkers.
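For concreteness, here’s the back-of-the-envelope arithmetic behind that 2400x figure, a minimal sketch using only the numbers quoted above:

```python
# 40 minutes of wall-clock time to simulate 1 second of brain activity.
wall_clock_seconds = 40 * 60   # 2400 seconds
simulated_seconds = 1

slowdown = wall_clock_seconds / simulated_seconds
print(f"~{slowdown:.0f}x slower than real time")   # -> ~2400x slower than real time
```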
It’s common in computer science for some algorithms to be radically more efficient than others for accomplishing the same task. Thinking algorithms may be the same way. Evolution moves incrementally and it’s likely that there exist intelligence algorithms way better than the ones our brains run that evolution didn’t happen to discover for whatever reason. For example, even given the massive amount of computational power at our brain’s disposal, it takes us on the order of minutes to do relatively trivial computations like 3967274 * 18574819. And the sort of thinking that we associate with technological progress pushes at the boundaries of what our brains are designed for: most humans aren’t capable of making technological breakthroughs, and the ones who are have to work hard at it. So it’s possible that you could have an AGI that could do things like hack computers and discover physics way better and faster than humans while using much less computational power.
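To make the “radically more efficient algorithms for the same task” point concrete, here’s a toy comparison of my own (not from the original discussion): two ways of computing the same Fibonacci number, one exponential-time and one linear-time, plus the multiplication above, which a machine finishes essentially instantly.

```python
import time
from functools import lru_cache

def fib_naive(n):
    # Exponential time: recomputes the same subproblems over and over.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Linear time: each subproblem is solved once and cached.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

for fib in (fib_naive, fib_memo):
    start = time.perf_counter()
    value = fib(32)
    print(f"{fib.__name__}(32) = {value}  ({time.perf_counter() - start:.4f}s)")

# The "relatively trivial computation" from the comment, instant for a machine:
print(3967274 * 18574819)
```

Same answer both times; the only difference is the algorithm.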
2) Being able to read your own source code does not mean you can self-modify. You know that you’re made of DNA. You can even get your own “source code” sequenced for a few thousand dollars. No human has successfully self-modified their way into an intelligence explosion; the idea seems laughable.
In programming, I think it’s often useful to think in terms of a “debugging cycle”: once you think you know how to fix a bug, how long does it take you to verify that your fix actually works? This is a critical input into your productivity as a programmer. The debugging cycle for DNA is very long; it would take on the order of years to see whether flipping a few base pairs resulted in a more intelligent human. The debugging cycle for software is often much shorter: compiling an executable is much quicker than raising a child.
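As a rough illustration of how lopsided those two cycles are (the cycle lengths below are my own illustrative guesses, not figures from the comment):

```python
# Rough comparison of feedback-loop lengths; both numbers are illustrative.
SECONDS_PER_YEAR = 365 * 24 * 3600

software_cycle_s = 60                   # recompile + rerun the tests: ~a minute
dna_cycle_s = 20 * SECONDS_PER_YEAR     # wait for a child to grow up: ~20 years

print(f"The DNA debugging cycle is ~{dna_cycle_s / software_cycle_s:,.0f}x longer")
```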
Also, DNA is really bad source code: even though we’ve managed to get ahold of it, biologists have found it to be almost completely unreadable :) For humans, reading human-designed computer code is way easier than reading DNA, and the same most likely holds for computers.
3) Self-improvement is not like compound interest: if an AI comes up with an idea to modify its source code to make it smarter, that doesn’t automatically mean it will have a new idea tomorrow. In fact, as it picks off the low-hanging fruit, new ideas will probably get harder and harder to think of. There’s no guarantee that “how smart the AI is” will keep up with “how hard it is to think of ways to make the AI smarter”; to me, that seems very unlikely.
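A toy model of that distinction (entirely my own sketch, with made-up parameters): under compound interest every gain feeds directly into the next, whereas in a “low-hanging fruit” regime each successive idea costs more to find than the last, so progress keeps decelerating even though the system keeps getting smarter.

```python
# Toy comparison: compound growth vs. growth where each idea is harder to find.
# All parameters are made up purely for illustration.

def compound(intelligence=1.0, rate=0.05, steps=100):
    for _ in range(steps):
        intelligence *= 1 + rate                 # every step compounds on the last
    return intelligence

def low_hanging_fruit(intelligence=1.0, steps=100):
    cost_of_next_idea = 1.0
    research_done = 0.0
    for _ in range(steps):
        research_done += intelligence            # a smarter AI searches faster...
        if research_done >= cost_of_next_idea:
            research_done -= cost_of_next_idea
            intelligence *= 1.05                 # ...each idea gives a 5% boost...
            cost_of_next_idea *= 1.10            # ...but the next idea is 10% harder
    return intelligence

print(f"compound interest:     {compound():.1f}")
print(f"diminishing new ideas: {low_hanging_fruit():.1f}")
```

With these particular parameters the compound model explodes while the low-hanging-fruit model plods along, which is exactly the disagreement being argued about; different parameter choices give different outcomes.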
This is your best objection, in my opinion; it’s also something I discussed in my essay on this topic. I think it’s hard to say much one way or the other. In general, I think people are too certain about whether AI will foom or not.
I’m also skeptical that foom will happen, but I don’t think arguments 1 or 2 are especially strong.
Evolution moves incrementally and it’s likely that there exist intelligence algorithms way better than the ones our brains run that evolution didn’t happen to discover for whatever reason.
Maybe, but that doesn’t mean we can find them. Brain emulation and machine learning seem like the most viable approaches, and they both require tons of distributed computing power.