To deal with people making that claim more easily, I’d like to see a post by you or someone else involved with SIAI summarizing the evidence for existential risks from AI, including the arguments for a hard takeoff and for why the AI’s goals must hit a narrow target of Friendliness.