This seems like a very important document that could support/explain/justify various sensible actions relevant to AI x-risk. It's well-credentialed, plausibly comprehensible to an outsider, and focuses on things outside the scope of mainstream AI safety efforts yet closer to the core of AI x-risk, even if not quite there (it cites "damage in the tens of thousands of lives lost" as a relevant scale of impact, not "everyone on Earth will die").