Eliezer Yudkowsky can make AIs Friendly by glaring at them.
And the first action of any Friendly AI will be to create a nonprofit institute to develop a rigorous theory of Eliezer Yudkowsky. Unfortunately, it will turn out to be an intractable problem.
Transhuman AIs theorize that if they could create Eliezer Yudkowsky, it would lead to an “intelligence explosion”.