“In real life, I’d expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world.”
If you believe this, you should favor slowing down AI research and speeding up work on enhancing human intelligence. The smarter we are, the more likely we are to figure out Friendly AI before we have true AI.
Also, if you really believe this, shouldn't you want the CIA to start assassinating AI programmers?