Why? ~All the other gov stuff I’m aware of that talks about “GCR” or that talks about AI in the context of “high-consequence [catastrophic] events, regardless of the low probability” cites Bostrom, MIRI, Ord, or Stuart Russell.
(But I agree they’re likely to have views closer to Superintelligence, Human Compatible, or The Precipice than to AGI Ruin. I just think of those views as pretty close to the Yudkowskian paradigm; e.g., Bostrom is big on paperclippers and foom.)
Bostrom and MIRI being cited is pretty cool. I would have thought they’d be outside the Overton window.
EDIT: Do you know when the earliest citations occurred?
E.g., Preparing for the Future of Artificial Intelligence and Wired in 2016.