A big part of the purpose of the Sequences is to head off likely mistakes and missteps by smart people trying to think about AI. ‘Friendly AI’ is a sufficiently difficult problem that it may be more urgent to raise the sanity waterline, filter for technical and philosophical insight, and amplify that insight (e.g., through CFAR), than to merely inform academia that AI is risky. Given people’s tendencies to leap on the first solution that pops into their head, indulge in anthropomorphism and optimism, and become inoculated against arguments that don’t fully persuade them on the first go, there’s a case to be made for improving people’s epistemic rationality, and honing the MIRI arguments more carefully, before diving into outreach.