There’s our solution to scope insensitivity about existential risks. “If unfriendly AI undergoes an intelligence explosion, millions of Steves will die. Won’t somebody please think of the Steves?”