SIAI’s goal is not to be the ones to implement the first superintelligence, but just to make sure that the first one is Friendly.
Not terribly long ago, that wasn’t true:
“The Singularity Institute was founded on the theory that in order to get a Friendly artificial intelligence, someone has got to build one. So, we’re just going to have an organization whose mission is: build a Friendly AI. That’s us.”
http://www.acceleratingfuture.com/people-blog/?p=196
Has there been a memo?